[ENH] test suite for pytorch-forecasting forecasters #1780
Conversation
    return all_train_kwargs, train_kwargs_names

def _integration(
After looking at the `_integration` function on this line, I was wondering whether it can be modularized better by splitting it into smaller functions like `prepare_data`, `setup_trainer`, etc. This might help debug errors flagged by the tests. If required, these smaller functions can be nested under a larger function.
Agreed! I was thinking the same thing.
My suggestion was to sequence this, though, since across files there are multiple copy-paste variations. So:
1. refactor the current `_integration` into a single file and loop, taking the variable parts
2. then refactor into modular parts
Not sure if this is the best way - if we get stuck, we may want to try the other way round, since 2 might make 1 easier.
result = []
ROOT = str(Path(__file__).parent.parent)  # package root directory

def _coerce_to_str(obj):
Should we add these "coerce" functions to `utils._coerce`, just to make them usable all over the module?
Hm, I thought about it and would say "no", because it assumes the presence of `get_tag` means we have a `BaseObject` descendant instance. The `utils._coerce` module makes no specific assumptions on inheritance.
I wonder how we could resolve this - maybe we move the `get_tag` logic out?
I have a few comments:
I think this PR can be the basis of the tests for v2 that I'll be working on.
One more doubt: should we add some initialisation tests as well for the models, or are the integration tests enough?
Good question - I thought we may later also attach metadata to layers, in which case the layers would also inherit from
Agreed, although I think there also should be smaller unit tests. Plus, we need to abstract out variable parts, possibly in more "test parameter getter" methods in the metadata class.
The fixture generators are generic, and could be used (by inheritance) for any base class defined by the
I think that is a good idea. For v2, I would advise to add a
More things need to change:
Yes, I would advise to branch off in a way that does not overwrite the v1 tests - which we will also need. One way to filter could be using an additional
Yes, I think there should be init tests and unit tests as well. This PR just aimed at refactoring all the current tests - that is perhaps a good start, and we may like to complete it. For v2, though, where the tests are written from scratch, I would strongly advise adding init and unit tests!
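For reference, an "init test" in the sklearn/sktime sense checks that constructing an estimator with defaults stores parameters unchanged. A self-contained sketch, with a hypothetical `check_init` helper and dummy model (neither is in the PR):

```python
# Sketch of an init test: construct with defaults and verify that
# __init__ stores each keyword parameter unchanged (sklearn-style
# convention). check_init and DummyModel are illustrative only.
import inspect


def check_init(est_cls):
    """Instantiate est_cls with defaults and check params round-trip."""
    sig = inspect.signature(est_cls.__init__)
    est = est_cls()
    for name, param in sig.parameters.items():
        if name == "self" or param.default is inspect.Parameter.empty:
            continue
        # each defaulted parameter should be stored as-is on the instance
        assert getattr(est, name) == param.default, f"{name} was modified"


class DummyModel:
    def __init__(self, hidden_size=16, dropout=0.1):
        self.hidden_size = hidden_size
        self.dropout = dropout


check_init(DummyModel)  # passes: params stored unchanged
```

Such checks catch a common bug class early, where `__init__` mutates or renames a parameter and later cloning/serialization breaks.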
This PR adds a systematic test suite and a `check_estimator` utility for `pytorch-forecasting` forecasters.
The interface checked is the current unified API across models. This may change in the future, but no changes to the API are made.
Work in progress.
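For readers unfamiliar with the pattern, a `check_estimator`-style utility runs a collection of interface checks against an estimator class and reports the results. A minimal self-contained sketch of the pattern (the checks, names, and signature here are illustrative, not the utility added by this PR):

```python
# Sketch of a check_estimator-style utility, mirroring the sklearn/sktime
# pattern: run each check against the class, collect pass/fail results.

def _check_has_fit(est_cls):
    assert hasattr(est_cls, "fit"), f"{est_cls.__name__} lacks fit"


def _check_has_predict(est_cls):
    assert hasattr(est_cls, "predict"), f"{est_cls.__name__} lacks predict"


def check_estimator(est_cls, raise_exceptions=False):
    """Run all interface checks; return a dict of check name -> outcome."""
    results = {}
    for check in (_check_has_fit, _check_has_predict):
        try:
            check(est_cls)
            results[check.__name__] = "PASSED"
        except AssertionError as err:
            if raise_exceptions:
                raise
            results[check.__name__] = f"FAILED: {err}"
    return results


class DummyForecaster:
    """Illustrative estimator satisfying the checked interface."""

    def fit(self, y):
        return self

    def predict(self, h):
        return [0] * h
```

The real utility in the PR will differ in which checks it runs and how estimators are discovered; the pattern of "one check function per interface contract" is the common core.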