Importing curation, exporters and comparison with full dependencies installed is super slow #2862
I think this started about a month ago: https://github.com/SpikeInterface/spikeinterface/actions/workflows/test_imports.yml?page=15&query= I am suspicious of this one (sadly)!: https://github.com/SpikeInterface/spikeinterface/actions/runs/8649714706
But this makes sense, right? Previously the cache was written after the first slow import, and then each subsequent import would be super fast because no jit was happening. If we are not caching that file, then at each import you have to compile the jitted functions. So we are now seeing the true import time that the user will see. Is there a reason why we want n=10 instead of n=5? This also means that if we edited between imports we might actually use the old version of the functions rather than the current one because of the cache. Because we are developing so much, it makes sense to just deal with this, no?
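The caching behavior being discussed can be sketched with numba's `cache=True` option. This is a hedged illustration, not the actual spikeinterface kernels; the pure-Python fallback is only there so the sketch runs without numba installed:

```python
# Illustrative sketch of numba's on-disk cache: with cache=True, the compiled
# machine code is written to a cache directory next to the source file, so
# later interpreter sessions can load it instead of re-jitting. The stale-cache
# concern above arises when previously written cache files are reused (e.g.
# restored by CI) against edited source.
try:
    from numba import njit

    @njit(cache=True)
    def square(x):
        return x * x
except ImportError:
    # numba not installed: plain-Python stand-in with the same behavior
    def square(x):
        return x * x

print(square(3.0))  # first call compiles (or loads the cached binary)
```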
Yeah, that's my guess, but I remember being concerned about it and finding that it was not the case. We have been discussing how to wrap numba functions so they are not costly at import time. I was concerned that if they were not wrapped properly they would get compiled at import time (and not at call time). I remember being convinced that my fears were unfounded after some experimentation, but maybe they were not? I think that can be tested easily by printing all the import times instead of just the mean and the std: the first import time should be very large and then the rest smaller. Also, if this is the case, we should wrap the numba functions; compilation should not happen at import time. Concerning the number of samples, I estimated at that point that they should be as large as for core. 5 is fine for such a small std, but 10 was better, and the measurements were done before the core tests anyway, so the extra precision came at no cost. Now the story is different, so maybe it makes sense to reduce the statistical power.
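The "print every import time" check suggested above can be sketched as follows. The module name here is a stand-in so the snippet runs anywhere; in practice you would time `spikeinterface.core` and friends:

```python
import subprocess
import sys
import time

# Time each import in a fresh interpreter so the first (compilation-heavy)
# run is visible instead of being averaged away into a mean/std.
# "json" is a stand-in module; replace with the module under test.
MODULE = "json"

times = []
for i in range(5):
    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", f"import {MODULE}"], check=True)
    times.append(time.perf_counter() - t0)
    print(f"import {i}: {times[-1]:.3f} s")
```

If compilation (or cache writing) happens at import time, the first sample should stand out clearly against the rest.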
I think that testing was before my time, but it seems that compilation can be triggered at import time. So maybe some part of the wrapping hasn't worked? https://numba.discourse.group/t/aot-compilation-succeeds-but-import-still-slow/250/10
Some of them are wrapped; others are not. I am pro-wrapping. @samuelgarcia and @DradeAW were against it (on aesthetic grounds, I believe). But I have to confess that I don't have proof. This is suggestive, but let's take a deeper look.
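The wrapping pattern under debate can be sketched as a decorator that defers `numba.jit` to the first call, so importing the module never triggers compilation. This is a hedged sketch under assumed names (`lazy_jit` is hypothetical, not the spikeinterface helper):

```python
import functools

def lazy_jit(**jit_kwargs):
    """Defer numba compilation to the first call instead of import time.
    numba itself is only imported inside the wrapper, so importing the
    module that defines the function stays cheap."""
    def decorator(func):
        compiled = None

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal compiled
            if compiled is None:
                try:
                    import numba
                    compiled = numba.jit(**jit_kwargs)(func)
                except ImportError:
                    compiled = func  # no numba: fall back to pure Python
            return compiled(*args, **kwargs)

        return wrapper
    return decorator

@lazy_jit(nopython=True)
def add(a, b):
    return a + b

print(add(2, 3))  # compilation (if numba is present) happens here, not at import
```

The aesthetic objection is understandable: the decorator hides the jit call and adds a small dispatch cost on every invocation, but it guarantees import stays cheap.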
Thanks a lot for this investigation, this is super helpful.
Fixed by #2932 |
Discovered in #2861. Importing these modules when the full dependencies are installed is super slow. This is the case for Ubuntu and Windows in two different currently open PRs:
https://github.com/SpikeInterface/spikeinterface/actions/runs/9111761651
https://github.com/SpikeInterface/spikeinterface/actions/runs/9104003224?pr=2858
Not sure when this started happening.
To figure this out, a first step is to look at the logs of the import-summary action. Another approach would be to use git bisect.
This is something that we should track with airspeed velocity (cc @chrishalcrow )
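An airspeed velocity (asv) import benchmark could look like the sketch below. asv's `timeraw_*` benchmarks run each repetition in a fresh interpreter, which is exactly what import timing needs; the module name comes from this issue, everything else (class name, repeat count) is an assumption:

```python
# Hypothetical asv benchmark suite: asv discovers methods named timeraw_*
# and executes the returned code string in a fresh subprocess per sample,
# so jit/cache effects at import time are actually measured.
class ImportTimeSuite:
    # a small repeat count keeps CI cheap (cf. the n=10 vs n=5 discussion)
    repeat = 5

    def timeraw_import_core(self):
        return "import spikeinterface.core"
```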