My experience uploading a model ;) #2
Hey guys,
Great job making an automated submission pipeline! I gave it a try as I wanted to score an updated CORnet-R version that merges CORnet-S's powers with the full recurrence of CORnet-R. According to my internal measures, it's not as good as CORnet-S, but Martin encouraged me to submit it nonetheless because it may be helpful for somebody.
At any rate, in this issue I wanted to document several issues I encountered while trying to wrap the model for scoring (thanks for the helpful PytorchWrapper class!). Some of those issues are probably on my end, but some might need your attention.
- README:
  - spelling: "avaiable"
  - doesn't explain how exactly I am supposed to run a test
  - links to candidate-models and model-tools are mixed up
  - might be a good idea to mention an easy way to get a virtual environment going: `python3 -m venv .venv`, then `source .venv/bin/activate` (at least on unix-likes), and continue with `pip install .`
  - might also be good to mention a simple way to "package" your model by uploading code and weights to GitHub. It took me a bit to figure out where I could host that huge weight file.
 
- PytorchWrapper: doesn't work for recurrent models that return not only outputs but also states (a sketch of a possible workaround is below).
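For illustration, something along these lines could unwrap the `(output, state)` tuples before the wrapper sees them (just a sketch, not model-tools code; `TensorOutput` is a name I made up):

```python
import torch
from torch import nn

class TensorOutput(nn.Module):
    """Expose only the output tensor of a module that returns (output, state)."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, *args, **kwargs):
        out = self.module(*args, **kwargs)
        # recurrent layers typically return (output, state) tuples
        return out[0] if isinstance(out, tuple) else out

# e.g. an LSTM returns (output, (h_n, c_n)); wrapped, it returns just the tensor
rnn = TensorOutput(nn.LSTM(input_size=10, hidden_size=20, batch_first=True))
out = rnn(torch.randn(2, 5, 10))  # plain tensor of shape (2, 5, 20)
```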
- Tests:
  - Somehow the `xarray` version is not properly specified: I got `0.15` or so installed initially (by just running `pip install .` as suggested), but then got an error at some point running tests, so I had to downgrade to `0.12`. Pinning the version (e.g. `xarray==0.12`) would help.
  - I could not get tests working, as `from test import test_modules` kept importing some `test` module from someplace else. I think this may be due to relative imports, but not sure. Look up in my submission what I did to resolve it. (A quick way to check what is being imported is sketched after this list.)
  - Tests are trying to download 9.83G of imagenet. Maybe a minor point, but it seems like tests could be run with much fewer resources. Or at least there could be some "light" version of the tests just to make sure everything is more or less working.
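For reference, a quick way to check which `test` package actually wins the import (Python's standard library ships one too, which could be the culprit, though I'm not sure):

```python
# print where `test` resolves from; if this points into the standard
# library rather than the repo, the local test/ package is being shadowed
import test
print(test.__file__)
```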
 
- I ran out of memory on my instance when running with all layers (V1, V2, V4, IT). This is due to activations not fitting into memory when doing PCA (line 65 in `activations/pca.py`). Hopefully you guys have sufficiently large instances that fit all the crazy models :) (Something like incremental PCA might also help; see the sketch after this list.)
- When submitting, I chose a zip file to be uploaded and it showed its path as `C:\fakepath\...zip` :D
- Now let's see if my submission doesn't crash ;)
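On the PCA memory point, for illustration only (made-up shapes, not the actual `activations/pca.py` code): scikit-learn's `IncrementalPCA` can be fit chunk by chunk, so the full activation matrix never has to sit in memory at once.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# fit PCA one chunk at a time instead of on the full activation matrix
ipca = IncrementalPCA(n_components=100, batch_size=512)
for _ in range(10):                     # stand-in for batches of stimuli
    chunk = np.random.randn(512, 4096)  # (stimuli, flattened activations)
    ipca.partial_fit(chunk)
reduced = ipca.transform(np.random.randn(512, 4096))  # shape (512, 100)
```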
 
Good luck!