
new style xml and conf-mode scripts: possibility to add tagNode value as a parameter to the conf script
Closed, Resolved · Public · FEATURE REQUEST

Description

When conf-mode scripts run for tagNodes, the script is executed once for every tagNode instance with changes applied.

Currently that means the same configure script runs two or more times if there are changes on multiple tagNode instances. Because the tagNode value is not propagated into the script, the script cannot detect which instance it is running for, so it has to generate data for all instances of the tagNode. The effect is that the same data is calculated multiple times and config daemons are possibly reloaded multiple times on each commit.

This is the case for wireguard, bcast_relay, cron, and other scripts executed from tagNodes.

In the old Vyatta syntax you use $VAR(@) to get the tagNode value as an input. That still works with the following syntax:

<tagNode name="example" owner="${vyos_conf_scripts_dir}/example.py $VAR(@)"> -- accessed by sys.argv[1] or

<tagNode name="example" owner="VALUE='$VAR(@)' ${vyos_conf_scripts_dir}/example.py"> . -- accessed by os.environ["VALUE"]

In these examples the value is available to the script, but this only works with the old Vyatta config engine.
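For illustration, a minimal sketch of how a new-style Python conf-mode script could pick up the value under either of the two syntaxes above, if one of these mechanisms were available (the script itself is just a placeholder):

#!/usr/bin/env python3
# Placeholder example.py: read the tagNode value, whichever way it arrives.
import os
import sys

def get_tag_value():
    # "owner=.../example.py $VAR(@)" variant: value arrives as the first argument
    if len(sys.argv) > 1:
        return sys.argv[1]
    # "owner=VALUE='$VAR(@)' .../example.py" variant: value arrives via the environment
    return os.environ.get('VALUE')

if __name__ == '__main__':
    print("Running config script for: {}".format(get_tag_value()))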

Would it be possible to add this as a parameter via the vyatta template generator, or as an environment variable passed into all tagNode scripts?

Details

Difficulty level
Hard (possibly days)
Version
-
Why the issue appeared?
Will be filled on close
Is it a breaking change?
Perfectly compatible
Issue type
Internal change (not visible to end users)

Event Timeline

Hi @runar,

I'm not sure I understand the issue correctly, but I don't think it's a good idea. Tag nodes can be nested, and you need to figure out whether a change happened anyway, so the script runs only once.
Since you mention wireguard, let me take that as an example.

set int wireguard <interfacename> - the interface name is a tagNode, so you can of course have multiple ones, and they all have their own specific config; basically you can see it as a config container. So you have to loop through them anyway and check their entries if it's an update; if it's new you go directly into verify() within your script.
Within an interface, you can have multiple peers, and they have their own config space (allowed-ips, for instance).
If allowed-ips are added or removed, I have them in an array within the config dictionary, so they are applied in a loop within the script.
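Roughly, that loop-over-all-instances pattern looks like this (a sketch only; the method names are the usual vyos.config helpers as I recall them, not the actual wireguard script):

from vyos.config import Config

def get_config():
    conf = Config()
    wireguard = {}
    # walk every tagNode instance, since the script cannot tell which one
    # triggered the current run
    for intf in conf.list_nodes('interfaces wireguard'):
        wireguard[intf] = {
            'addresses': conf.return_values('interfaces wireguard {} address'.format(intf)),
            'peers': conf.list_nodes('interfaces wireguard {} peer'.format(intf))
        }
    return wireguard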

So, if you create a new interface, the script itself runs only once, regardless of whether it's an update or a new creation.

At least, if I haven't overlooked anything; if I have, please let me know which lines.

Hi @hagbard,

The current implementation of the config interpreter does not work that way.
It is correct that your config script needs to take into account all added/removed config within your tagNode, but the script will actually run once for every tagNode instance you define.
Let me take an easy example:

<?xml version="1.0"?>
<interfaceDefinition>
  <node name="interfaces">
    <children>
      <tagNode name="example" owner="${vyos_conf_scripts_dir}/example.sh">
        <properties>
        </properties>
        <children>
          <leafNode name="description">
            <properties>
            </properties>
          </leafNode>
        </children>
      </tagNode>
    </children>
  </node>
</interfaceDefinition>

This code creates a tagNode interface named example with a single leaf node inside (description). So far so good. :)
I also made a config script to run on this; for this example it's a single shell script:

#!/bin/bash
echo "Running config script for: $VALUE"

Here I have applied my patch in advance to use as a demonstration. Besides that, the only thing my patch does is forward the parameter into the environment variable passed to the script. I have not changed anything about the order the scripts are run in.

To show my example I've created three of these interfaces:

[edit]
vyos@vyos1# set interfaces example ex1 description test1
[edit]
vyos@vyos1# set interfaces example ex2 description test2
[edit]
vyos@vyos1# set interfaces example ex3 description test3

If what you say were true, I would expect to see "Running config script for: " only once, and without any parameter forwarded, when doing the commit. But:

vyos@vyos1# commit
[ interfaces example ex1 ]
Running config script for: ex1

[ interfaces example ex2 ]
Running config script for: ex2

[ interfaces example ex3 ]
Running config script for: ex3

[edit]
vyos@vyos1#

As you can see, the configuration script is executed three times, once for every tagNode instance in my configuration.
This is true when creating the tagNode instances; on edits of data inside a tagNode, the config interpreter knows which tagNode changed and only executes the script for the instances that actually changed.

vyos@vyos1# set interface example ex2 description i_changed_this
[edit]
vyos@vyos1# commit
[ interfaces example ex2 ]
Running config script for: ex2

[edit]
vyos@vyos1#

The same is also true on tagNode deletion:

vyos@vyos1# delete interface example ex2
[edit]
vyos@vyos1# commit
[ interfaces example ex2 ]
Running config script for: ex2

[edit]
vyos@vyos1#

As you can see in this example, the config script is executed not once for the tagNode, but once for every instance of that tagNode. :)
And that's why we need to get hold of the tagNode "value" the script is currently executing for.
You could try adding a print statement to the wireguard script and creating some wireguard interfaces to see how many times it's printed before returning to the shell.

I don't know if the new config interpreter does things this way, but I know the old one does. :)

On the other hand, I completely agree with you, but I also think that if the new config interpreter is going to change this behavior, it has to allow both variants.
This has to do with how the application you are integrating is configured.
For applications that use a single instance/config for all instances created, it would be wise to do this in a single run (e.g. lldp and wireguard?).
But for applications that have one process and a separate config for every instance you create, I think it would be better to have the script execute once for every instance.

--Runar

To run the same example as it behaves in the current rolling development build, I have reverted my patch:

[edit]
vyos@vyos1# set interface example ex1 description test1
[edit]
vyos@vyos1# set interface example ex2 description test2
[edit]
vyos@vyos1# set interface example ex3 description test3
[edit]
vyos@vyos1# commit
[ interfaces example ex1 ]
Running config script for:

[ interfaces example ex2 ]
Running config script for:

[ interfaces example ex3 ]
Running config script for:

[edit]
vyos@vyos1#

Here I have created three instances of the example interface, and the script is executed once for every tagNode instance created.
I then changed two of them:

[edit]
vyos@vyos1# set interface example ex2 description i_changed_this
[edit]
vyos@vyos1# set interface example ex3 description i_changed_this
[edit]
vyos@vyos1# commit
[ interfaces example ex2 ]
Running config script for:

[ interfaces example ex3 ]
Running config script for:

[edit]
vyos@vyos1#

Then, the same again: we see the script running twice, but we can't figure out which interface it is executing for right now, so we need to do all checks on all of them every time the script fires.

gotcha, now I see the fnords.

I had the exact same problem with the service broadcast-relay code. I did a cleanup of my youthful folly, resulting in this issue going away (https://github.com/vyos/vyos-1x/commit/fd1eabe72862ec364643a61cb94b21c330a385f5#diff-e27a1d5bf8f371aae9da8bbd36e674d0).

I moved the call of the Python script one node up - which is the way it should have been done from the beginning.

syncer triaged this task as Normal priority.Sep 1 2018, 2:47 PM

Yes, in some situations this is resolvable, e.g. in the service broadcast-relay example, where the owner parameter could be moved to the "top node" for that block. The problem with interfaces is that every config block is a tagNode, so we can't do that trick without moving the owner up to the interfaces node, which catches all interfaces and not just interfaces of the type you want.

I've been looking into how this is implemented in all instances of interfaces/*, and every one of them uses the same run-on-every-tag-value approach.
Here are a couple of examples of simple implementations, taken from their node.def files:
openvpn:
sudo /opt/vyatta/sbin/vyatta-update-ovpn.pl "$VAR(@)"

vti:
${vyatta_sbindir}/vyatta-vti-config.pl --checkref --intf=$VAR(@)

Having a look at the vbash code: https://github.com/vyos/vyatta-cfg/blob/current/src/commit/commit-algorithm.cpp#L234
/* note: commit traversal doesn't include "tag node", only "tag values".
Running things on a whole tag node is intentionally left out of the picture; there is for now no way we can run things on a whole tag node.
That is in line with how everything is implemented so far in the config definitions.

Don't get me wrong, I'm not saying we are trying to do things the wrong way, but for now (as far as I can find out) it is not possible to implement the run-once-for-the-entire-tag philosophy that e.g. wireguard is trying to follow, as the owner attribute needs to sit on the tag node. The only way to fix this is to add another node that owns that config block one layer above the tag node, but that will again screw things up syntax-wise.
To implement this feature we would need major changes to the vbash code.
If that is not implemented, I think we should stick with executing tags as "execute once for each tag value", which is what vbash actually does for us, and to make that work we need the tag value exported to the script in some way so we know what value we are working on.

In the case of wireguard, when you create 4 wireguard interfaces now, the Python script behind the wireguard generation is executed 4 times, once for each tag value (wg# interface). Because there is no way to get the currently running tag value into the script, there is no way to write a script that only handles one interface, so you get multiple generations of the same configuration, and that is not intended behavior.
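For comparison, a sketch of what a conf script could do if the tag value were exported; the VALUE variable and both apply_* helpers are hypothetical placeholders:

import os

def apply_single_interface(intf):
    # hypothetical: (re)generate config for this one instance only
    print("generating config for {} only".format(intf))

def apply_all_interfaces():
    # hypothetical: regenerate config for every instance (today's behaviour)
    print("generating config for all instances")

if __name__ == '__main__':
    intf = os.environ.get('VALUE')
    if intf:
        apply_single_interface(intf)
    else:
        apply_all_interfaces()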

What should we do in this case? My vote leans against doing it the intended (run-once-per-tag) way at this point in time; we can add support for "run only once per tag" at a later point, e.g. when the new interpreter is completed.

@hagbard @dmbaturin @c-po @syncer - what do you think? Just add more people if I've left out someone who might have a comment on this topic.

As noted on Slack:
A way to implement run-once-per-tag:
If, after the first execution for the tag, we create a temp file ('touch /tmp/complete-blah'), we can then check for the existence of that file on every run and skip if it exists.
In e.g. wireguard/node.def:

end: if [ ! -f /tmp/runonce-wireguard.lock ]; then
         sudo sh -c "${vyos_conf_scripts_dir}/wireguard.py"
         touch /tmp/runonce-wireguard.lock
     fi

This way wireguard.py should only execute on the first run and be skipped on all subsequent runs.

We also need a job, executed at the end of every commit, that deletes all runonce-*.lock files.
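A sketch of such a cleanup step (the /tmp/runonce-*.lock naming just follows the convention above; where exactly it gets hooked into the end of the commit is an open question):

import glob
import os

# remove all run-once lock files so the next commit starts fresh
for lock in glob.glob('/tmp/runonce-*.lock'):
    try:
        os.unlink(lock)
    except OSError:
        pass  # lock file already gone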

To make it active, it would need to be implemented as a parameter in the XML syntax for tag nodes that generates the 'if [ ! -f ... ]'/touch structure in the final node.def file.

That said, I still want an option to do this the way vbash wants it to be done, with the tag value as a parameter.

I think it would work, but I haven't tested it.

This looks like an ultimate hackaround. Maybe we should check whether we can change the C implementation.

It's a little hack, but not the ultimate one, I think. :P Temporary files for storing state are used quite a few times in the original bash/perl scripts.

This works as expected when I set new parameters inside a node - unfortunately, on a delete it does not work properly; it seems the script is not called at all.

Found this while working on T1556

On further investigation it looks like this happens only when deleting instances of a multi: node.

Never mind - the problem was on my side only.

I used the following:

<?xml version="1.0"?>
<interfaceDefinition>
  <node name="interfaces">
    <children>
      <tagNode name="bridge" owner="${vyos_conf_scripts_dir}/interface-bridge.py">
        <properties>
          <help>Bridge interface name</help>
          <priority>310</priority>
          <constraint>
            <regex>^br[0-9]+$</regex>
          </constraint>
          <constraintErrorMessage>Bridge interface must be named brN</constraintErrorMessage>
          <valueHelp>
            <format>brN</format>
            <description>Bridge interface name</description>
          </valueHelp>
        </properties>
        <children>
          <leafNode name="address">
            <properties>
              <help>IP address</help>
              <priority>320</priority>
              <valueHelp>
                <format>ipv4net</format>
                <description>IPv4 address and prefix length</description>
              </valueHelp>
              <valueHelp>
                <format>ipv6net</format>
                <description>IPv6 address and prefix length</description>
              </valueHelp>
              <valueHelp>
                <format>dhcp</format>
                <description>Dynamic Host Configuration Protocol</description>
              </valueHelp>
              <valueHelp>
                <format>dhcpv6</format>
                <description>Dynamic Host Configuration Protocol for IPv6</description>
              </valueHelp>
              <constraint>
                <validator name="cidr"/>
                <regex>(dhcp|dhcpv6)</regex>
              </constraint>
              <multi/>
            </properties>
          </leafNode>
        </children>
      </tagNode>
    </children>
  </node>
</interfaceDefinition>

The problem was the additional <priority>320</priority> statement, which I inherited from the old Vyatta implementation. Sorry for the noise.

zsdc changed Difficulty level from Unknown (require assessment) to Hard (possibly days).Mar 11 2021, 5:40 PM
zsdc set Is it a breaking change? to Unspecified (possibly destroys the router).
c-po assigned this task to runar.
dmbaturin changed Is it a breaking change? from Unspecified (possibly destroys the router) to Perfectly compatible.Sep 3 2021, 12:13 PM
dmbaturin set Issue type to Internal change (not visible to end users).