Extend unit tests #2640
Conversation
Hi @qododavid! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations; afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Pull Request Overview
This PR extends the unit tests for various components of the Prophet module, aiming to increase code coverage and exercise behaviors such as model serialization, parameter warm-starting, holiday validation, and diagnostics cross-validation.
- Added tests for warm-start parameter extraction in both MCMC and non-MCMC scenarios (a sketch of the warm-start pattern follows this list)
- Introduced tests to verify model serialization behavior and error handling for unfitted models
- Extended Prophet validation tests covering invalid scaling parameters, holiday constraints, and diagnostics scenarios
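For context, the warm-start tests exercise the pattern below. This is only a minimal sketch based on the public Prophet documentation: the synthetic `df` is purely illustrative, and `warm_start_params` is assumed to live in `prophet.utilities`, as the new `test_utilities.py` tests suggest.

```python
import pandas as pd
from prophet import Prophet
from prophet.utilities import warm_start_params

# Toy dataframe with the 'ds'/'y' columns Prophet expects (illustration only).
df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=100),
    "y": range(100),
})

m1 = Prophet().fit(df)
# Reuse m1's fitted parameters to initialize a second fit (warm start).
m2 = Prophet().fit(df, init=warm_start_params(m1))
```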
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| python/prophet/tests/test_utilities.py | New tests for warm_start_params functionality with and without MCMC |
| python/prophet/tests/test_serialize.py | Tests for conversion of fit_kwargs and model serialization error handling for unfitted models |
| python/prophet/tests/test_prophet.py | Additional validations for scaling, holiday parameters, and error messaging in Prophet |
| python/prophet/tests/test_diagnostics.py | New tests for diagnostics cross-validation covering import errors, cutoff validations, and logging warnings |
```python
assert np.array_equal(coefs["coef_lower"].values, coefs["coef"].values)
assert np.array_equal(coefs["coef_upper"].values, coefs["coef"].values)

def test_warm_start_params_with_mcmc(self, daily_univariate_ts, backend):
```
[nitpick] The tests for warm_start_params with and without MCMC share similar setup code. Consider parameterizing these tests to reduce duplication.
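One way to act on this suggestion is a single parametrized test. This is only a sketch under assumptions: `daily_univariate_ts` and `backend` are fixtures from the PR's test suite, it is written as a module-level function rather than a class method for brevity, and the assertion on the returned keys mirrors the Prophet documentation rather than the PR's actual checks.

```python
import pytest
from prophet import Prophet
from prophet.utilities import warm_start_params

@pytest.mark.parametrize("mcmc_samples", [0, 100])  # 0 -> MAP fit, >0 -> MCMC
def test_warm_start_params(daily_univariate_ts, backend, mcmc_samples):
    # Fit once, then check that warm-start parameters can be extracted
    # in both the MAP and MCMC configurations.
    m = Prophet(mcmc_samples=mcmc_samples, stan_backend=backend)
    m.fit(daily_univariate_ts)
    params = warm_start_params(m)
    assert {"k", "m", "delta", "beta"} <= set(params)
```

Parametrizing keeps the shared setup in one place while still reporting the MAP and MCMC runs as separate test cases.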
```python
model_dict = model_to_dict(m)

assert model_dict['fit_kwargs']['init']['param1'] == [1.0, 2.0]
```
[nitpick] Consider using np.array_equal or np.allclose to compare numpy arrays instead of direct equality to a native list to avoid potential type issues.
Suggested change:
```diff
- assert model_dict['fit_kwargs']['init']['param1'] == [1.0, 2.0]
+ assert np.array_equal(model_dict['fit_kwargs']['init']['param1'], np.array([1.0, 2.0]))
```
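The type issue the comment refers to: if the deserialized value comes back as a NumPy array rather than a list, `==` produces an elementwise boolean array, and the bare `assert` raises instead of failing cleanly. A small standalone illustration:

```python
import numpy as np

value = np.array([1.0, 2.0])

# value == [1.0, 2.0] evaluates to array([True, True]); asserting on it raises
# "ValueError: The truth value of an array with more than one element is ambiguous."
# np.array_equal handles lists and arrays uniformly:
assert np.array_equal(value, [1.0, 2.0])
```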
```python
if 'dask' in sys.modules:
    del sys.modules['dask']
```
[nitpick] Instead of directly deleting dask from sys.modules, consider using monkeypatch.delitem to avoid side effects in other tests.
Suggested change:
```diff
- if 'dask' in sys.modules:
-     del sys.modules['dask']
+ monkeypatch.delitem(sys.modules, 'dask', raising=False)
```
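A minimal sketch of the suggested pattern, assuming pytest's built-in `monkeypatch` fixture; the test name and trailing body are placeholders, not the PR's actual code. Because `monkeypatch` reverts its changes at teardown, the `dask` entry in `sys.modules` is restored for subsequent tests.

```python
import sys

def test_cross_validation_dask_import_error(monkeypatch):
    # raising=False makes this a no-op when dask was never imported;
    # the original sys.modules entry (if any) is restored after the test.
    monkeypatch.delitem(sys.modules, "dask", raising=False)
    # ... the rest of the test body from the PR would go here
```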
Hi, I used cover-agent to write the tests for this PR. It uses an AI-based workflow to add new test cases that increase code coverage.
All AI-generated tests have met these four essential criteria:
Increased Coverage
Why adding unit tests is important
Writing and maintaining unit tests can be tedious, especially as codebases grow larger.
Despite this challenge, unit tests provide key benefits:
Tests serve as executable documentation by: