
WAN Load Balancing - Can't create routing tables
Closed, Resolved · Public · BUG

Description

Tested on VyOS 1.4-rolling-202204130521

Commands for WAN Load Balancing:

set interfaces ethernet eth0 address 'dhcp'
set interfaces ethernet eth1 address 'dhcp'
set interfaces ethernet eth2 address 'dhcp'
set interfaces ethernet eth3 address '192.168.250.1/24'
set load-balancing wan interface-health eth0 failure-count '5'
set load-balancing wan interface-health eth0 nexthop 'dhcp'
set load-balancing wan interface-health eth0 success-count '1'
set load-balancing wan interface-health eth0 test 10 resp-time '5'
set load-balancing wan interface-health eth0 test 10 target '1.1.1.1'
set load-balancing wan interface-health eth0 test 10 ttl-limit '1'
set load-balancing wan interface-health eth0 test 10 type 'ttl'
set load-balancing wan interface-health eth0 test 20 resp-time '5'
set load-balancing wan interface-health eth0 test 20 target '1.0.0.1'
set load-balancing wan interface-health eth0 test 20 ttl-limit '1'
set load-balancing wan interface-health eth0 test 20 type 'ping'
set load-balancing wan interface-health eth1 failure-count '5'
set load-balancing wan interface-health eth1 nexthop 'dhcp'
set load-balancing wan interface-health eth1 success-count '1'
set load-balancing wan interface-health eth1 test 10 resp-time '5'
set load-balancing wan interface-health eth1 test 10 target '8.8.8.8'
set load-balancing wan interface-health eth1 test 10 ttl-limit '1'
set load-balancing wan interface-health eth1 test 10 type 'ttl'
set load-balancing wan interface-health eth1 test 20 resp-time '5'
set load-balancing wan interface-health eth1 test 20 target '8.8.4.4'
set load-balancing wan interface-health eth1 test 20 ttl-limit '1'
set load-balancing wan interface-health eth1 test 20 type 'ping'
set load-balancing wan interface-health eth2 failure-count '5'
set load-balancing wan interface-health eth2 nexthop 'dhcp'
set load-balancing wan interface-health eth2 success-count '1'
set load-balancing wan interface-health eth2 test 10 resp-time '5'
set load-balancing wan interface-health eth2 test 10 target '9.9.9.9'
set load-balancing wan interface-health eth2 test 10 ttl-limit '1'
set load-balancing wan interface-health eth2 test 10 type 'ttl'
set load-balancing wan interface-health eth2 test 20 resp-time '5'
set load-balancing wan interface-health eth2 test 20 target '149.112.112.112'
set load-balancing wan interface-health eth2 test 20 ttl-limit '1'
set load-balancing wan interface-health eth2 test 20 type 'ping'
set load-balancing wan rule 1000 description 'DEFAULT FAILOVER RULE'
set load-balancing wan rule 1000 failover
set load-balancing wan rule 1000 inbound-interface 'eth3'
set load-balancing wan rule 1000 interface eth0 weight '3'
set load-balancing wan rule 1000 interface eth1 weight '2'
set load-balancing wan rule 1000 interface eth2 weight '1'
set load-balancing wan rule 1000 protocol 'all'
set load-balancing wan sticky-connections inbound
set protocols static route 1.0.0.1/32 next-hop 172.16.0.1
set protocols static route 1.1.1.1/32 next-hop 172.16.0.1
set protocols static route 8.8.4.4/32 next-hop 192.168.122.1
set protocols static route 8.8.8.8/32 next-hop 192.168.122.1
set protocols static route 9.9.9.9/32 next-hop 172.16.2.1
set protocols static route 149.112.112.112/32 next-hop 172.16.2.1

The output of the wan-load-balance status command:

Chain WANLOADBALANCE_PRE (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ISP_eth0   all  --  eth3   *       0.0.0.0/0            0.0.0.0/0            state NEW
    0     0 CONNMARK   all  --  eth3   *       0.0.0.0/0            0.0.0.0/0            CONNMARK restore
[edit]

vyos@vyos# sudo ip rule show
0:	from all lookup local
32763:	from all fwmark 0xcb lookup 203
32764:	from all fwmark 0xca lookup 202
32765:	from all fwmark 0xc9 lookup 201
32766:	from all lookup main
32767:	from all lookup default
[edit]

The output of the ip route command:

vyos@vyos# sudo ip r show
default nhid 30 proto static metric 20 
	nexthop via 172.16.0.1 dev eth0 weight 1 
	nexthop via 172.16.2.1 dev eth2 weight 1 
	nexthop via 192.168.122.1 dev eth1 weight 1 
1.0.0.1 nhid 35 via 172.16.0.1 dev eth0 proto static metric 20 
1.1.1.1 nhid 35 via 172.16.0.1 dev eth0 proto static metric 20 
8.8.4.4 nhid 37 via 192.168.122.1 dev eth1 proto static metric 20 
8.8.8.8 nhid 37 via 192.168.122.1 dev eth1 proto static metric 20 
9.9.9.9 nhid 36 via 172.16.2.1 dev eth2 proto static metric 20 
149.112.112.112 nhid 36 via 172.16.2.1 dev eth2 proto static metric 20 
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.10 
172.16.2.0/24 dev eth2 proto kernel scope link src 172.16.2.10 
192.168.122.0/24 dev eth1 proto kernel scope link src 192.168.122.35 
192.168.250.0/24 dev eth3 proto kernel scope link src 192.168.250.1 

vyos@vyos# sudo ip r show table 201
Error: ipv4: FIB table does not exist.
Dump terminated
[edit]
vyos@vyos# sudo ip r show table 202
Error: ipv4: FIB table does not exist.
Dump terminated
[edit]
vyos@vyos# sudo ip r show table 203
Error: ipv4: FIB table does not exist.
Dump terminated
[edit]
vyos@vyos#

Routing tables 201, 202 and 203 are not created.

Details

Difficulty level
Unknown (require assessment)
Version
VyOS 1.4-rolling-202204130521
Why did the issue appear?
Will be filled on close
Is it a breaking change?
Unspecified (possibly destroys the router)
Issue type
Bug (incorrect behavior)

Event Timeline

This comment was removed by Viacheslav.

Confirming the same.

me@gw01# sudo ip rule
0:	from all lookup local
98:	from all fwmark 0xcc lookup 204
99:	from all fwmark 0xcb lookup 203
100:	from all fwmark 0xca lookup 202
101:	from all fwmark 0xc9 lookup 201
102:	from all fwmark 0x7fffff99 lookup 102
114:	from all fwmark 0x7fffff8d lookup 114
32766:	from all lookup main
32767:	from all lookup default
me@gw01# sudo ip r show table 202
Error: ipv4: FIB table does not exist.
Dump terminated
[edit]
me@gw01# sudo ip r show table 203
Error: ipv4: FIB table does not exist.
Dump terminated
[edit]
me@gw01# sudo ip r show table 204
Error: ipv4: FIB table does not exist.
Dump terminated

If it helps, I am also getting the exact same errors and problems; I would love to see this working, please.

I can't reproduce it.
With the following configuration everything works fine on VyOS 1.4-rolling-202204300743:

set load-balancing wan interface-health eth4 failure-count '5'
set load-balancing wan interface-health eth4 nexthop 'dhcp'
set load-balancing wan interface-health eth4 success-count '1'
set load-balancing wan interface-health eth4 test 10 target '192.0.2.40'
set load-balancing wan interface-health eth5 failure-count '5'
set load-balancing wan interface-health eth5 nexthop 'dhcp'
set load-balancing wan interface-health eth5 success-count '1'
set load-balancing wan interface-health eth5 test 10 target '192.0.2.50'
set load-balancing wan interface-health eth6 failure-count '5'
set load-balancing wan interface-health eth6 nexthop 'dhcp'
set load-balancing wan interface-health eth6 success-count '1'
set load-balancing wan interface-health eth6 test 10 target '192.0.2.60'
set load-balancing wan rule 10 failover
set load-balancing wan rule 10 inbound-interface 'eth7'
set load-balancing wan rule 10 interface eth4
set load-balancing wan rule 10 interface eth5
set load-balancing wan rule 10 interface eth6
set load-balancing wan rule 10 protocol 'all'
set load-balancing wan sticky-connections

Check:

vyos@tstrtr2:~$ sudo ip route show table 201
default via 100.64.4.1 dev eth4 
vyos@tstrtr2:~$ 
vyos@tstrtr2:~$ sudo ip route show table 202
default via 100.64.5.1 dev eth5 
vyos@tstrtr2:~$ 
vyos@tstrtr2:~$ sudo ip route show table 203
default via 100.64.6.1 dev eth6 
vyos@tstrtr2:~$ 
vyos@tstrtr2:~$ show wan-load-balance 
Interface:  eth4
  Status:  active
  Last Status Change:  Wed May  4 10:32:59 2022
  +Test:  ping  Target: 192.0.2.40
    Last Interface Success:  0s 
    Last Interface Failure:  2m0s       
    # Interface Failure(s):  0

Interface:  eth5
  Status:  active
  Last Status Change:  Wed May  4 10:32:59 2022
  +Test:  ping  Target: 192.0.2.50
    Last Interface Success:  0s 
    Last Interface Failure:  2m0s       
    # Interface Failure(s):  0

Interface:  eth6
  Status:  active
  Last Status Change:  Wed May  4 10:32:59 2022
  +Test:  ping  Target: 192.0.2.60
    Last Interface Success:  0s 
    Last Interface Failure:  2m0s       
    # Interface Failure(s):  0

vyos@tstrtr2:~$

I was able to reproduce the issue on the latest VyOS 1.4-rolling-202205060217.
Steps to reproduce:
1 - Start from a fresh/clean VyOS router
2 - Add the interface configuration (DHCP on the WANs and static IP addresses on the LAN side), commit and save
3 - Add the following WLB configuration:

vyos@WLB:~$ show config comm | grep wan
set load-balancing wan interface-health eth0 failure-count '5'
set load-balancing wan interface-health eth0 nexthop 'dhcp'
set load-balancing wan interface-health eth0 success-count '1'
set load-balancing wan interface-health eth0 test 10 resp-time '5'
set load-balancing wan interface-health eth0 test 10 target '1.1.1.1'
set load-balancing wan interface-health eth0 test 10 ttl-limit '1'
set load-balancing wan interface-health eth1 failure-count '5'
set load-balancing wan interface-health eth1 nexthop 'dhcp'
set load-balancing wan interface-health eth1 success-count '1'
set load-balancing wan interface-health eth1 test 10 resp-time '5'
set load-balancing wan interface-health eth1 test 10 target '8.8.8.8'
set load-balancing wan interface-health eth1 test 10 ttl-limit '1'
set load-balancing wan rule 10 failover
set load-balancing wan rule 10 inbound-interface 'eth3.25'
set load-balancing wan rule 10 interface eth0 weight '50'
set load-balancing wan rule 10 interface eth1 weight '100'
set load-balancing wan rule 10 protocol 'all'
set load-balancing wan rule 20 failover
set load-balancing wan rule 20 inbound-interface 'eth3.35'
set load-balancing wan rule 20 interface eth0 weight '100'
set load-balancing wan rule 20 interface eth1 weight '50'
set load-balancing wan rule 20 protocol 'all'

4 - commit;save
5 - Verify routing tables, save configuration and reboot:

vyos@WLB# sudo ip route show table 201
default via 198.51.100.1 dev eth0 
[edit]
vyos@WLB# sudo ip route show table 202
default via 192.0.2.1 dev eth1 
[edit]
vyos@WLB# save
Saving configuration to '/config/config.boot'...
Done
[edit]
vyos@WLB# run reboot now

6 - After reboot, verify WLB and routing tables:

vyos@WLB:~$ show wan-load-balance 
Interface:  eth0
  Status:  active
  Last Status Change:  Fri May  6 17:47:58 2022
  +Test:  ping  Target: 1.1.1.1
    Last Interface Success:  0s 
    Last Interface Failure:  n/a                
    # Interface Failure(s):  0

Interface:  eth1
  Status:  active
  Last Status Change:  Fri May  6 17:47:58 2022
  -Test:  ping  Target: 8.8.8.8
    Last Interface Success:  n/a                
    Last Interface Failure:  0s 
    # Interface Failure(s):  1

vyos@WLB:~$ sudo ip route show table 201
Error: ipv4: FIB table does not exist.
Dump terminated
vyos@WLB:~$ sudo ip route show table 202
Error: ipv4: FIB table does not exist.
Dump terminated
vyos@WLB:~$ 
vyos@WLB:~$ 
vyos@WLB:~$ sudo ip rule
0:      from all lookup local
32764:  from all fwmark 0xca lookup 202
32765:  from all fwmark 0xc9 lookup 201
32766:  from all lookup main
32767:  from all lookup default

Some debug info:

iptables-nft -t raw -L PREROUTING

vyos@r14# sudo iptables-nft -t raw -L PREROUTING
iptables v1.8.7 (nf_tables): chain `PREROUTING' in table `raw' is incompatible, use 'nft' tool.
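
Since iptables-nft refuses to list that chain, the same chain can be inspected with nft directly, as the error message itself suggests (equivalent command for reference):

# list the raw PREROUTING chain with nft, which can show rules iptables-nft considers incompatible
sudo nft list chain ip raw PREROUTING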

https://github.com/vyos/vyatta-wanloadbalance/blob/a831f22d4c34bf947b0335e55573280b75c2bde0/src/lbdecision.cc#L825

vyos@r14# sudo /opt/vyatta/sbin/wan_lb -v -f /var/run/load-balance/wlb.conf  -i /var/run/vyatta/wlb.pid
LBDataFactory::process(1): health:
LBDataFactory::process(2): interface:eth0
LBDataFactory::process(2): failure-ct:2
LBDataFactory::process(2): success-ct:1
LBDataFactory::process(2): nexthop:dhcp
LBDataFactory::process(3): rule:10
LBDataFactory::process(4): type:ping
LBTest::init()
send raw sock: 5
LBDataFactory::process(4): target:1.1.1.1
LBDataFactory::process(4): resp-time:5000
LBDataFactory::process(3): :
LBDataFactory::process(2): :
LBDataFactory::process(1): :
LBDataFactory::process(2): interface:eth1
LBDataFactory::process(2): failure-ct:2
LBDataFactory::process(2): success-ct:1
LBDataFactory::process(2): nexthop:dhcp
LBDataFactory::process(3): rule:10
LBDataFactory::process(4): type:ping
LBDataFactory::process(4): target:8.8.8.8
LBDataFactory::process(4): resp-time:5000
LBDataFactory::process(3): :
LBDataFactory::process(2): :
LBDataFactory::process(1): :
LBDataFactory::process(0): :
health:
  interface: eth0
    nexthop: dhcp
    success ct: 1
    failure ct: 2
    test: 10
      target: 1.1.1.1, resp_time: 5000
  interface: eth1
    nexthop: dhcp
    success ct: 1
    failure ct: 2
    test: 10
      target: 8.8.8.8, resp_time: 5000

wan:
end dump
STARTING CYCLE
LBDecision::execute(): applying command to system: iptables-nft -t nat -N WANLOADBALANCE
iptables: Chain already exists.
LBDecision::execute(): applying command to system: iptables-nft -t nat -F WANLOADBALANCE
LBDecision::execute(): applying command to system: iptables-nft -t nat -D VYOS_PRE_SNAT_HOOK -j WANLOADBALANCE
iptables: Bad rule (does a matching rule exist in that chain?).
LBDecision::execute(): applying command to system: iptables-nft -t nat -I VYOS_PRE_SNAT_HOOK 1 -j WANLOADBALANCE
LBDecision::execute(): applying command to system: iptables-nft -t raw -N WLB_CONNTRACK
LBDecision::execute(): applying command to system: iptables-nft -t raw -F WLB_CONNTRACK
LBDecision::execute(): applying command to system: iptables-nft -t raw -A WLB_CONNTRACK -j ACCEPT
LBDecision::execute(): applying command to system: iptables-nft -t raw -D PREROUTING -j WLB_CONNTRACK
iptables: Bad rule (does a matching rule exist in that chain?).
LBDecision::execute(): applying command to system: iptables-nft -t raw -L PREROUTING
iptables v1.8.7 (nf_tables): chain `PREROUTING' in table `raw' is incompatible, use 'nft' tool.

LBDecision::execute(): applying command to system: iptables-nft -t raw -I PREROUTING 2 -j WLB_CONNTRACK
LBDecision::execute(): applying command to system: iptables-nft -t mangle -N WANLOADBALANCE_PRE
LBDecision::execute(): applying command to system: iptables-nft -t mangle -F WANLOADBALANCE_PRE
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A WANLOADBALANCE_PRE -j ACCEPT
LBDecision::execute(): applying command to system: iptables-nft -t mangle -D PREROUTING -j WANLOADBALANCE_PRE
iptables: Bad rule (does a matching rule exist in that chain?).
LBDecision::execute(): applying command to system: iptables-nft -t mangle -I PREROUTING 1 -j WANLOADBALANCE_PRE
LBDecision::execute(): applying command to system: iptables-nft -t mangle -N ISP_eth0
LBDecision::execute(): applying command to system: iptables-nft -t mangle -F ISP_eth0
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth0 -j CONNMARK --set-mark 201
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth0 -j MARK --set-mark 201
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth0 -j ACCEPT
LBDecision::execute(): applying command to system: ip rule delete table 201
RTNETLINK answers: No such file or directory
LBDecision::execute(): applying command to system: ip rule add fwmark 0XC9 table 201
LBDecision::execute(): applying command to system: iptables-nft -t nat -A WANLOADBALANCE -m connmark --mark 201 -j SNAT --to-source 192.168.122.190
LBDecision::execute(): applying command to system: iptables-nft -t mangle -N ISP_eth1
LBDecision::execute(): applying command to system: iptables-nft -t mangle -F ISP_eth1
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth1 -j CONNMARK --set-mark 202
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth1 -j MARK --set-mark 202
LBDecision::execute(): applying command to system: iptables-nft -t mangle -A ISP_eth1 -j ACCEPT
LBDecision::execute(): applying command to system: ip rule delete table 202
RTNETLINK answers: No such file or directory
LBDecision::execute(): applying command to system: ip rule add fwmark 0XCA table 202
LBDecision::execute(): applying command to system: iptables-nft -t nat -A WANLOADBALANCE -m connmark --mark 202 -j SNAT --to-source 192.168.100.199
LBDecision::execute(): applying command to system: ip route flush cache
main.cc: starting new cycle
LBPathTest::start(): init
LBPathTest::start(): sending 2 tests
ICMPEngine::start(): sending ping test for: eth1 for 8.8.8.8 id: 1
ICMPEngine::send(): sendto: 40, packet id: 1
ICMPEngine::start(): sending ping test for: eth0 for 1.1.1.1 id: 2
ICMPEngine::send(): sendto: 40, packet id: 2
LBPathTest::start(): waiting on recv
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 431
LBTest::recv(): 431
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 432
LBTest::recv(): 432
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 1
LBTest::recv(): 1
LBTest::recv(): usecs: 20761, secs: 0
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 2
LBTest::recv(): 2
LBTest::recv(): usecs: 37602, secs: 0
LBTest::recv(): finished heath test
LBTest::recv(): success for eth1 : 20
LBPathTest::start() interface: eth1 response value: 20
LBTest::recv(): success for eth0 : 37
LBPathTest::start() interface: eth0 response value: 37
LBDecision::run(), starting decision
LBDecision::run(), state changed, applying new rule set
LBDecision::execute(): applying command to system: iptables-nft -t mangle -F WANLOADBALANCE_PRE
eth0 true 0 1653560882 
eth1 true 0 1653560882 




main.cc: starting new cycle
LBPathTest::start(): init
LBPathTest::start(): sending 2 tests
ICMPEngine::start(): sending ping test for: eth1 for 8.8.8.8 id: 3
ICMPEngine::send(): sendto: 40, packet id: 3
ICMPEngine::start(): sending ping test for: eth0 for 1.1.1.1 id: 4
ICMPEngine::send(): sendto: 40, packet id: 4
LBPathTest::start(): waiting on recv
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 433
LBTest::recv(): 433
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 434
LBTest::recv(): 434
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 3
LBTest::recv(): 3
LBTest::recv(): usecs: 20605, secs: 0
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 4
LBTest::recv(): 4
LBTest::recv(): usecs: 37410, secs: 0
LBTest::recv(): finished heath test
LBTest::recv(): success for eth1 : 20
LBPathTest::start() interface: eth1 response value: 20
LBTest::recv(): success for eth0 : 37
LBPathTest::start() interface: eth0 response value: 37
LBDecision::run(), starting decision
LBDecision::run(), state changed, applying new rule set
LBDecision::execute(): applying command to system: ip route show table 201
LBDecision::execute(): applying command to system: ip route show table 202
eth0 true 0 1653560887 
eth1 true 0 1653560887 




main.cc: starting new cycle
LBPathTest::start(): init
LBPathTest::start(): sending 2 tests
ICMPEngine::start(): sending ping test for: eth1 for 8.8.8.8 id: 5
ICMPEngine::send(): sendto: 40, packet id: 5
ICMPEngine::start(): sending ping test for: eth0 for 1.1.1.1 id: 6
ICMPEngine::send(): sendto: 40, packet id: 6
LBPathTest::start(): waiting on recv
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 435
LBTest::recv(): 435
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 436
LBTest::recv(): 436
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 5
LBTest::recv(): 5
LBTest::recv(): usecs: 20568, secs: 0
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 6
LBTest::recv(): 6
LBTest::recv(): usecs: 37371, secs: 0
LBTest::recv(): finished heath test
LBTest::recv(): success for eth1 : 20
LBPathTest::start() interface: eth1 response value: 20
LBTest::recv(): success for eth0 : 37
LBPathTest::start() interface: eth0 response value: 37
LBDecision::run(), starting decision
LBDecision::run(), state changed, applying new rule set
LBDecision::execute(): applying command to system: ip route show table 201
LBDecision::execute(): applying command to system: ip route show table 202
eth0 true 0 1653560892 
eth1 true 0 1653560892

Expected output:
ip route replace table xxx default dev ethX via x.x.x.x

main.cc: starting new cycle
LBPathTest::start(): init
LBPathTest::start(): sending 2 tests
ICMPEngine::start(): sending ping test for: eth1 for 8.8.8.8 id: 3
ICMPEngine::send(): sendto: 40, packet id: 3
ICMPEngine::start(): sending ping test for: eth0 for 1.1.1.1 id: 4
ICMPEngine::send(): sendto: 40, packet id: 4
LBPathTest::start(): waiting on recv
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 3
LBTest::recv(): 3
LBTest::recv(): usecs: 20545, secs: 0
LBTest::receive(): start
LBTest::receive() received: 56
LBTest::receive(): 4
LBTest::recv(): 4
LBTest::recv(): usecs: 37608, secs: 0
LBTest::recv(): finished heath test
LBTest::recv(): success for eth1 : 20
LBPathTest::start() interface: eth1 response value: 20
LBTest::recv(): success for eth0 : 37
LBPathTest::start() interface: eth0 response value: 37
LBDecision::run(), starting decision
LBDecision::run(), state changed, applying new rule set
LBDecision::execute(): applying command to system: ip route show table 201
LBDecision::execute(): applying command to system: ip route replace table 201 default dev eth0 via 192.168.122.1
LBDecision::execute(): applying command to system: ip route show table 202
LBDecision::execute(): applying command to system: ip route replace table 202 default dev eth1 via 192.168.100.1
eth0 true 0 1653580455 
eth1 true 0 1653580455

load-balancing wan is completely broken with nexthop dhcp on 1.4 (it happens after the first reboot or lease renew).
The script gets empty values here: https://github.com/vyos/vyatta-wanloadbalance/blob/a831f22d4c34bf947b0335e55573280b75c2bde0/src/lbdecision.cc#L180
So ip route replace table ... is never executed.
Why does it get an empty value?
It parses the lease file here: https://github.com/vyos/vyatta-wanloadbalance/blob/a831f22d4c34bf947b0335e55573280b75c2bde0/src/lbdata.cc#L335-L341
looking for the option new_routers, and in 1.4 the file looks like this:

vyos@r14:~$ cat /var/lib/dhcp/dhclient_peth1.lease
Fri Jun 17 16:19:43 EEST 2022
reason='PREINIT'
interface='peth1'
new_expiry=''
new_dhcp_lease_time=''
medium=''
alias_ip_address=''
new_ip_address=''
new_broadcast_address=''
new_subnet_mask=''
new_domain_name=''
new_network_number=''
new_domain_name_servers=''
new_routers=''
new_static_routes=''
new_dhcp_server_identifier=''
new_dhcp_message_type=''
old_ip_address=''
old_subnet_mask=''
old_domain_name=''
old_domain_name_servers=''
old_routers=''
old_static_routes=''
vyos@r14:~$

We expect some values there, like:

vyos@r1:~$ cat /var/lib/dhcp/dhclient_eth0.lease
Fri Jun 17 16:19:41 EEST 2022
reason='BOUND'
interface='eth0'
new_expiry='1655475580'
new_dhcp_lease_time='3600'
medium=''
alias_ip_address=''
new_ip_address='192.168.122.104'
new_broadcast_address='192.168.122.255'
new_subnet_mask='255.255.255.0'
new_domain_name=''
new_network_number='192.168.122.0'
new_domain_name_servers='192.168.122.1'
new_routers='192.168.122.1'
new_static_routes=''
new_dhcp_server_identifier='192.168.122.1'
new_dhcp_message_type='5'
old_ip_address='192.168.122.104'
old_subnet_mask='255.255.255.0'
old_domain_name=''
old_domain_name_servers='192.168.122.1'
old_routers='192.168.122.1'
old_static_routes=''
vyos@r1:~$
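
To make the failure mode concrete, here is a rough shell sketch of what the daemon effectively does for an interface configured with nexthop 'dhcp' (a simplification of the lbdata.cc/lbdecision.cc logic linked above, not the actual implementation; the interface and table number are only examples):

# hypothetical sketch: an empty new_routers value means the table never gets populated
IF=eth0
TABLE=201
LEASE="/var/lib/dhcp/dhclient_${IF}.lease"
# wan_lb reads the DHCP gateway from the new_routers option in the lease file
GW=$(sed -n "s/^new_routers='\([^']*\)'.*/\1/p" "$LEASE")
if [ -n "$GW" ]; then
    # healthy case: the per-interface routing table gets its default route
    ip route replace table "$TABLE" default dev "$IF" via "$GW"
else
    # broken lease file (reason='PREINIT', empty options): the route is never installed,
    # so "ip r show table 201" reports "FIB table does not exist"
    echo "new_routers is empty for $IF; table $TABLE is never created"
fi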

For the same reason we don't see the address from op-mode; it parses the same file:

vyos@r14:~$ show dhcp client leases interface eth1
interface  : eth1
last update: Fri Jun 17 19:03:22 EEST 2022
reason     : PREINIT

vyos@r14:~$

If we find out what breaks this lease file, we will fix it.

I hope it can be found. I have been banging my head against the wall with this issue :( and it's hurting.

Thanks also. Great work!

@Viacheslav I may be on to something. It's related to the order of execution of the DHCP client exit hook scripts in /etc/dhcp/dhclient-exit-hooks.d.

If I rename /etc/dhcp/dhclient-exit-hooks.d/vyatta-dhclient-hook to /etc/dhcp/dhclient-exit-hooks.d/98-vyatta-dhclient-hook, I get a perfect /var/lib/dhcp/dhclient_eth0.lease file:

Mon Mar 13 16:07:18 CET 2023
reason='BOUND'
interface='eth0'
new_expiry='1679323536'
new_dhcp_lease_time='603499'
medium=''
alias_ip_address=''
new_ip_address='84.107.140.244'
new_broadcast_address='255.255.255.255'
new_subnet_mask='255.255.255.0'
new_domain_name='dynamic.ziggo.nl'
new_network_number='84.107.140.0'
new_domain_name_servers='84.116.46.21 84.116.46.20'
new_routers='84.107.140.1'
new_static_routes=''
new_dhcp_server_identifier='10.254.216.1'
new_dhcp_message_type='5'
old_ip_address='84.107.140.244'
old_subnet_mask='255.255.255.0'
old_domain_name='dynamic.ziggo.nl'
old_domain_name_servers='84.116.46.21 84.116.46.20'
old_routers='84.107.140.1'
old_static_routes=''

Then, the route is nicely added again:

» show ip route table 201
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

VRF default table 201:
K>* 0.0.0.0/0 [0/0] via 84.107.140.1, eth0, 00:06:24

Could it be that one of the other hook scripts is overwriting the contents of the .lease file?

@marc_s Try removing /etc/dhcp/dhclient-exit-hooks.d/ipsec-dhclient-hook; it could be a bug introduced by T4856.

@Viacheslav Confirmed, that is the culprit.
To be precise: I deleted ipsec-dhclient-hook and renamed 98-vyatta-dhclient-hook back to vyatta-dhclient-hook. Then I ran renew dhcp interface eth0 and got a correct .lease file.
Even once the IPsec script is fixed, it might be wise to prefix all scripts in /etc/dhcp/dhclient-exit-hooks.d with a number to enforce execution order, just like in /etc/dhcp/dhclient-enter-hooks.d.
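
For reference, a sketch of the renaming experiment described above (assuming the exit hooks are sourced in lexical order, which is what a numeric prefix relies on; this is a manual test, not a supported fix):

# give the VyOS hook a high numeric prefix so it runs after the other exit hooks
sudo mv /etc/dhcp/dhclient-exit-hooks.d/vyatta-dhclient-hook \
        /etc/dhcp/dhclient-exit-hooks.d/98-vyatta-dhclient-hook
ls -1 /etc/dhcp/dhclient-exit-hooks.d/
# renew the lease from op mode, then confirm the lease file carries the gateway again
renew dhcp interface eth0
grep new_routers /var/lib/dhcp/dhclient_eth0.lease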

Viacheslav changed the task status from Open to Needs testing. Apr 3 2023, 3:46 PM

@marc_s Will be fixed in the next rolling release, could you check?

Viacheslav claimed this task.
Viacheslav moved this task from Need Triage to Finished on the VyOS 1.4 Sagitta board.

Will test ASAP, next week I have a maintenance window, will let you know.

One issue: the migration scripts don't take older load-balancing configs into account.

If test <rule> type 'ping' isn't explicitly set, the test defaults to the next-hop address and the configured rule is ignored entirely; the default seems to be the next-hop address for the interface.
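
Until the migration is fixed, a possible workaround could be to set the test type explicitly on each health test, e.g. (hypothetical example, adjust the interface and test number to your config):

set load-balancing wan interface-health eth0 test 10 type 'ping'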

Should be fixed in https://github.com/vyos/vyos-1x/pull/1998 T5171