[Logging] Add fuzz-related logs context to corpus pruning task #4774
Conversation
if self == LogContextType.FUZZ:
  try:
    return FuzzLogStruct(
        fuzzer=log_contexts.meta.get('fuzzer_name', 'null'),
Why not return None instead of "null"?
I'm not entirely sure how GCP logging treats fields that are set to None; my guess is that it omits the field. IMO it's better to have this default value, meaning the information is missing or not applicable, than to omit the field entirely, which could suggest something went wrong during logging.
(I also used this piece of code as a reference: https://github.com/google/clusterfuzz/blob/master/src/clusterfuzz/_internal/metrics/logs.py#L420 )
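The default-sentinel pattern under discussion can be sketched as follows. This is a simplified, hypothetical stand-in, not the actual ClusterFuzz `FuzzLogStruct` definition; the field names are assumptions for illustration:

```python
import dataclasses
import json


@dataclasses.dataclass
class FuzzLogStruct:
  # Hypothetical, simplified stand-in for the struct in the PR.
  fuzzer: str
  job: str


def build_fuzz_log_struct(meta: dict) -> FuzzLogStruct:
  # Missing metadata becomes the explicit sentinel 'null' rather than
  # None, so the field is always present in the emitted payload and
  # "information not available" stays distinguishable from "field
  # dropped by the logger".
  return FuzzLogStruct(
      fuzzer=meta.get('fuzzer_name', 'null'),
      job=meta.get('job_type', 'null'),
  )


print(json.dumps(dataclasses.asdict(build_fuzz_log_struct({}))))
# → {"fuzzer": "null", "job": "null"}
```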
  },
})

def test_fuzz_logs_context(self):
This tests the log being emitted within the log context. We do not have integration tests that go through pre/main/postprocess, though, so we are not testing the actual task execution flow here.
Can we get some evidence of this running on a candidate release of some sort, to make sure nothing weird happens and we actually get the logs?
Yeah, we have this integration-test gap that hinders our testing capabilities for these kinds of changes.
I will do a onebox deployment with a candidate and paste the logs here.
I executed the task preprocess stage locally and pasted the logs in the PR description (tyvm for the help on how to do this @vitorguidi 🤝)
Thanks for the thorough testing! lgtm
Receive it!
Description
Unlike the tasks instrumented with log contexts so far, corpus pruning (and the fuzz task) is not based on a testcase. These two in particular operate on top of fuzzers, jobs, and fuzz targets. This PR:
Tests
Since corpus pruning runs on Batch, a onebox deployment with a candidate was not viable. Instead, the test was done by running the task's preprocess locally with the debugger and sending the logs to GCP. Evidence from the logs:
