Update dockerfiles to ubuntu-20.04 and autoconf #215
base: master
Conversation
The nvidia/cuda:10.2 image disappeared from Docker Hub, so the GPU versions in master were no longer buildable. Also, the two Dockerfiles that build bifrost itself were not yet using the configure script. However, the full-build `.gpu` version doesn't quite work yet, because nvidia-docker seems to provide GPU access during the run phase but not during the build phase. Perhaps relevant: https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime The `_prereq.gpu` version builds fully and is still useful. Also pinging PR #92, which is outdated but relevant.
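To make the run-vs-build distinction concrete, here is a rough sketch (the Dockerfile name and run flags are assumptions, not taken from this PR): the GPU device only shows up once the container is run with the NVIDIA runtime, so any build step that probes the hardware fails during `docker build`.

```shell
# Sketch only; file name and flags are assumptions.
docker build -f Dockerfile.gpu -t bifrost:gpu .      # no GPU visible here, so any
                                                     # hardware autodetection fails
docker run --rm --gpus all bifrost:gpu nvidia-smi    # GPU is only injected at run time
```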
Codecov Report: Patch and project coverage have no change.

Additional details and impacted files:

@@            Coverage Diff            @@
##           master     #215   +/-   ##
=========================================
  Coverage   65.39%   65.39%
=========================================
  Files          67       67
  Lines        5840     5840
=========================================
  Hits         3819     3819
  Misses       2021     2021

☔ View full report in Codecov by Sentry.
Oh look, it's our good friend the "test_matmul_aa_ci8" test failure (#187).
For the build of `.gpu`, could we just specify all supported SMs? And then set a reasonable default for the shared memory per SM? I think those are the only two things that require a working GPU.
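Something along these lines, as a sketch: the flag names, arch list, and shared-memory value below are assumptions based on bifrost's `configure` options rather than anything in this PR, but the point is that passing them explicitly means nothing has to query the GPU during `docker build`.

```shell
# Sketch; arch list, shared-mem size, and flag names are assumptions.
# These commands would sit in the Dockerfile's RUN step for the bifrost build.
./configure --with-gpu-archs="60 61 70 75 80 86" --with-shared-mem=49152
make -j"$(nproc)"
make install
```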
Should we turn the docker builds into CI tests? Maybe only in master?
Also write `_all_output.ipynb` to temporary dir rather than tutorial dir, so they don't match the glob again the next time. (Not an issue on ephemeral containers/CI tests, but with a persistent source dir they accumulate like `_all_output_all_output_all_output`. :)
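A minimal sketch of that idea, assuming the tutorials are executed via `jupyter nbconvert` (the actual test harness may work differently):

```shell
# Sketch; assumes nbconvert is what executes the tutorial notebooks.
# Writing the executed copies to a throwaway directory keeps *_all_output.ipynb
# out of the tutorial dir, so later runs don't pick them up via the glob.
OUTDIR="$(mktemp -d)"
jupyter nbconvert --to notebook --execute --output-dir "$OUTDIR" tutorial/*.ipynb
```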
I'm mixing another idea into this dockerfile PR, but it's minor stuff about test conditions that I ran into while figuring out the prereqs in Docker. Re: building with Dockerfile.gpu: yes, I think passing those explicitly to configure will work; I'll try it. It's similar to what we need with Nix, where the hermetic build means we can't directly query hardware capabilities. Re: building Docker images in CI: I wasn't sure whether docker build works inside Docker, but I guess it might. May be worthwhile if we want to keep supporting these files. (They are still mentioned in the README, and I'm still finding them convenient for testing and playing around.)
Maybe this could help with the docker stuff: https://github.com/marketplace/actions/build-and-push-docker-images
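In case it helps the discussion, a sketch of what a CI job could run directly (the Dockerfile names are assumptions based on the variants mentioned here); the linked action essentially wraps these builds and adds registry pushes and caching on top:

```shell
# Sketch; Dockerfile names are assumptions.
docker build -f Dockerfile.cpu -t bifrost:cpu .
docker build -f Dockerfile_prereq.gpu -t bifrost:gpu-prereq .
```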
Maybe we should go up to Jammy (Ubuntu 22.04) on this.