Conversation

@bdferris-v2 (Collaborator) commented on Feb 3, 2023

This allows for manually executing acceptance and end-to-end workflows.
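In GitHub Actions terms, manual execution is enabled by adding a `workflow_dispatch` trigger to the workflow file. The snippet below is a minimal sketch of that idea, not the actual diff in this PR; the workflow name, the extra `pull_request` trigger, and the placeholder job are assumptions.

```yaml
# Minimal sketch: a workflow that can be started manually from the Actions tab.
# The name, extra trigger, and job contents are placeholders, not this PR's diff.
name: Rule acceptance tests

on:
  workflow_dispatch:        # adds a "Run workflow" button for manual runs
  pull_request:             # existing automatic triggers can remain alongside it
    branches: [ master ]

jobs:
  acceptance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./gradlew test   # placeholder step; the real acceptance job differs
```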

@github-actions bot (Contributor) commented on Feb 3, 2023

✅ Rule acceptance tests passed.
New Errors: 0 out of 1389 datasets (~0%) are invalid due to code change, which is less than the provided threshold of 1%.
Dropped Errors: 0 out of 1389 datasets (~0%) are invalid due to code change, which is less than the provided threshold of 1%.
0 out of 1389 sources (~0%) are corrupted.
Commit: 00ce5d1
Download the full acceptance test report here (report will disappear after 90 days).

@bdferris-v2 requested a review from isabelle-dr on March 6, 2023 at 16:04
@bdferris-v2 (Collaborator, Author) commented

@isabelle-dr this might be a better alternative to opening manual PRs like #1347 when you want to run the acceptance workflows.

@isabelle-dr (Contributor) commented

Thank you!
Definitely a better alternative 🙃

@isabelle-dr (Contributor) commented

What would it mean in practice? For example, if I needed to do what I did in #1347 (see how many datasets trigger a particular error) with the updates you made in this PR, what would I do differently?

@bdferris-v2 (Collaborator, Author) commented

So I think there is still a bit of code I would write to help you here. Specifically, I would have the output comparator emit the mapping of validation notices => feeds as an additional output. Then, for each run, you could see the set of notices generated against the master branch.

(I had thought something like this already existed in the output, but after looking closer, it doesn't output quite what you need).
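To make the idea above concrete, such an additional output could map each notice code to the feeds that raised it when run against the master branch. The YAML below is only an illustrative sketch with placeholder codes and feed ids; no such file exists in the comparator today, and the real format and file name are not decided in this thread.

```yaml
# Hypothetical shape of a notices => feeds mapping emitted alongside the
# acceptance report; all notice codes and feed ids are placeholders.
notices_to_feeds:
  missing_required_file:
    - example-feed-a
    - example-feed-b
  invalid_phone_number:
    - example-feed-c
```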
