Conversation

@vladimiroltean
Contributor

We currently "rely" on Rob Herring's mail bot to provide sanity checking on patches that touch device tree bindings:
https://lore.kernel.org/netdev/[email protected]

and that is sufficient in "public" settings (patches sent to [email protected] and [email protected] also copied).

But in "private" settings (i.e. individual developers encouraged to do sanity checking on their own, for example by running ingest_mdir), we quickly find that Rob's mail bot is apparently closed source, and it has happened more than once (at least to me) for me to miss newly introduced dt_binding_check errors even if I did the due dilligence of running that test locally. So an automated check would be good to have.

The justification for including it in NIPA is that while device tree maintainers review binding patches, they get applied to the subsystem tree (in this case netdev).

Furthermore, I don't see the overlap with Rob Herring's mail bot as being a strong reason against such checks in NIPA, similar to how the existence of Intel's kbuild test robot does not preclude NIPA from having build tests.

In terms of implementation, "make dt_binding_check" does not always nicely print "error" or "warning" on the lines with issues. Furthermore, the errors span multiple lines. So instead of filtering for error lines, here we filter out the "normal" lines, which begin with things such as "make", "SCHEMA", "CHKDT", "LINT", "DTEX", "DTC".

Signed-off-by: Vladimir Oltean <[email protected]>
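
For illustration, a minimal Python sketch of that filtering approach (the function name, prefix list, and invocation are mine, not the actual test code):

import subprocess

# Prefixes that "make dt_binding_check" prints on normal, healthy lines.
NORMAL_PREFIXES = ("make", "SCHEMA", "CHKDT", "LINT", "DTEX", "DTC")

def dt_binding_check_issues(kernel_dir):
    # Run the check and capture both stdout and stderr.
    proc = subprocess.run(
        ["make", "-C", kernel_dir, "dt_binding_check"],
        capture_output=True, text=True,
    )
    issues = []
    for line in proc.stdout.splitlines() + proc.stderr.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        # Keep anything that does not look like normal progress output;
        # multi-line errors survive this inverted filter intact.
        if not stripped.startswith(NORMAL_PREFIXES):
            issues.append(line)
    return issues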
@vladimiroltean vladimiroltean marked this pull request as draft November 6, 2025 08:59
@vladimiroltean
Contributor Author

I've marked this as "draft" because it doesn't run "pip3 install dtschema --upgrade" - I am using it through ingest_mdir.py, so that would translate into a system requirement. I'm not even sure where that would go - in my case I am updating dtschema prior to each build, due to the high frequency of changes.

@kuba-moo
Contributor

kuba-moo commented Nov 6, 2025

I have no experience with checks which require some sort of external dependency to be updated. I can't think of a reason why pip3 install dtschema --upgrade as part of the test would be a deal-breaker, and we definitely don't want to have to update it manually in the CI. As you say, there is no hook in NIPA itself for such updates.
I think we can take this in and see how it fares.

@vladimiroltean
Contributor Author

Well, one reason against running pip3 install dtschema --upgrade as part of the test is that it's non-portable. For example, in my use case I have a Debian Trixie wrapper around ingest_mdir.py, and I can't use pip3 system-wide, as the Python packages are managed by the distribution. What I ended up doing is creating a venv per build, in which I run python3 -m pip install dtschema requests ply GitPython. The venv per build is done so that one concurrent build doesn't change the dtschema used by another in the middle of the test.
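
A minimal sketch of that per-build venv setup, using only the standard library (the build_id argument, the /tmp path, and the POSIX bin/ layout are illustrative):

import subprocess
import venv
from pathlib import Path

def make_build_venv(build_id):
    # Give each build its own environment, so one concurrent build
    # cannot change the dtschema another build is using mid-test.
    venv_dir = Path(f"/tmp/nipa-venv-{build_id}")
    venv.EnvBuilder(with_pip=True, clear=True).create(venv_dir)
    pip = venv_dir / "bin" / "pip"
    subprocess.run(
        [str(pip), "install", "dtschema", "requests", "ply", "GitPython"],
        check=True,
    )
    return venv_dir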

@kuba-moo
Contributor

kuba-moo commented Nov 6, 2025

Is there a way we can check whether dtschema is already installed by the user, as implicit agreement to update it? pip list --user?
Otherwise skip the test with an appropriate message.
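
One way to do that detection from Python itself, regardless of how dtschema was installed (pip, venv, or distro package), is importlib.metadata; a sketch, with an illustrative skip message:

from importlib import metadata

def dtschema_version():
    # Returns the installed dtschema version, or None if absent,
    # in which case the test can be skipped with a message.
    try:
        return metadata.version("dtschema")
    except metadata.PackageNotFoundError:
        return None

if dtschema_version() is None:
    print("SKIP: dtschema not installed; install it to enable this test")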

@vladimiroltean
Contributor Author

So pip install --user is non-portable as well: it doesn't work on Debian Trixie, which specifically wants a venv:

root@97008aaac20e:/app# python3 -m pip install --user dtschema
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.
    
    If you wish to install a non-Debian-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
    sure you have python3-full installed.
    
    If you wish to install a non-Debian packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.
    
    See /usr/share/doc/python3.13/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

Secondly, updating the same local package during the build still risks corrupting the dtschema state for other builds. I still think that each build needs to have its own.

@matttbe
Member

matttbe commented Nov 7, 2025

> Secondly, updating the same local package during the build still risks corrupting the dtschema state for other builds. I still think that each build needs to have its own.

Could we not simply have a requirements.txt file listing specific versions, and leave the responsibility to install dependencies to users?

CI and anybody else could periodically run pip install -r requirements.txt, or only when updating this repo. With pinned versions, we avoid test envs that differ too much, and we avoid updating while other builds are still running. (But I guess it is better not to change any files during builds; that's dangerous with shell scripts!)

And the good thing is that this file can also be used with venv.
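
For example, a requirements.txt along these lines (the version pins below are placeholders, not tested values):

# requirements.txt - version pins are illustrative placeholders
dtschema==2025.2
requests==2.32.3
ply==3.11
GitPython==3.1.43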

@kuba-moo
Contributor

kuba-moo commented Nov 7, 2025

Eh, I was hoping pip list --user would indeed fail like that outside a venv but pass inside one (which was the point), but testing it now, it really just checks the --user level each time. --local OTOH shows system packages when outside a venv. Ugh.
How about we annotate the test in the info.json file as "should be skipped by CLI unless explicitly opted into".

Edit: there is --require-virtualenv. Are you saying with the "secondly" that that's still not good enough because the user may not expect an update?
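
(For what it's worth, a venv can also be detected directly from Python rather than via pip flags; a minimal sketch:)

import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the venv while sys.base_prefix
    # still points at the base interpreter; outside one, the two match.
    return sys.prefix != sys.base_prefix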

@vladimiroltean
Contributor Author

>> Secondly, updating the same local package during the build still risks corrupting the dtschema state for other builds. I still think that each build needs to have its own.
>
> Could we not simply have a requirements.txt file listing specific versions, and leave the responsibility to install dependencies to users?
>
> CI and anybody else could periodically run pip install -r requirements.txt, or only when updating this repo. With pinned versions, we avoid test envs that differ too much, and we avoid updating while other builds are still running. (But I guess it is better not to change any files during builds; that's dangerous with shell scripts!)
>
> And the good thing is that this file can also be used with venv.

Sorry, but I don't see the intersection between the problems requirements.txt solves and the problems we have.

We don't need specific dtschema package versions; the latest at the time of the build is fine. However, the dtschema package version at the beginning of the build must be the same as the package version at the end of it. Otherwise, make dt_binding_check might run different tests for each patch, or worse, it may catch a non-atomically updated package, which will fail in odd ways.

I'm not running pw_poller.py (and maybe that's part of my difficulty in understanding), but AFAIU based on the README, there is no window in between builds where such package updates could take place. Assuming a CI running at full steam, this could only be done in Python code by creating a venv for each Tester object (worker), isolated from every other worker.

@vladimiroltean
Contributor Author

vladimiroltean commented Nov 7, 2025

> Edit: there is --require-virtualenv. Are you saying with the "secondly" that that's still not good enough because the user may not expect an update?

No. Updating the user's packages neither helps nor inherently breaks things. I'm just saying that the CI probably needs to manage a venv for each worker, since there is otherwise no right time to update this package if it's installed either system-wide or at the user level: there might be multiple workers pipelined, and there's no "RCU" for package management.
In that sense, pip3 --require-virtualenv does nothing to address this problem either, since all it does is test that we're in a venv, and if we come to the conclusion that we have to manage a virtualenv from code, we already know that we are...

