Distributing the software #21
Comments
What are the concrete build parameters we actually need to care about? Just GPU arch × accel backend? How large would a "build matrix" be? I think we have the following options:
1) Complete build with pip

There is a way to parametrize a pip build, for example via environment variables, but building the complete native module with Python tools will be a headache and, IMHO, quite a hack. This would depend on system installations for dependencies (Boost etc.).

2) Build with a combination of CMake and pip/setuptools

One option that would work: get the source code, build a native library with CMake for a specific environment (CUDA version etc.) and package it into a pip-installable wheel. Like 1, this would depend on system libraries for dependencies.

Options 1 and 2 can also be used to build a bunch of "generic" packages and publish them to PyPI.

3) Build and publish conda packages

I think with conda the problem would also be parametrizing the build. I don't think conda supports the "compile at install time" workflow that pip supports, so we would be limited to a small number of supported configurations with the published packages.

4) Use conda just for installing the build- and run-time dependencies

This would mean we use conda packages for dependencies, but locally build and install a specialized, optimized version, for example with CMake and pip. Users then wouldn't have to compile Boost themselves.

Use cases?

Can we maybe support two different use cases: one being the easy workstation/laptop/"casual" installation for trying things out, the other being the thoroughly optimized HPC installation? Then we can provide binary wheels for some common configurations and still allow an optimized installation for the HPC case.
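As a sketch of what a parametrized build (options 1 and 2) could look like, the helper below reads hypothetical environment variables and turns them into CMake arguments. `CMAKE_CUDA_ARCHITECTURES` is a real CMake variable; the `LIBERTEM_*` variable names and the `BACKEND` option are made-up placeholders, not an existing interface:

```python
import os

def cmake_args_from_env(env=None):
    """Translate (hypothetical) environment variables into CMake
    arguments for a parametrized source build."""
    if env is None:
        env = os.environ
    args = []
    # Target GPU architecture, e.g. LIBERTEM_CUDA_ARCH=60
    arch = env.get("LIBERTEM_CUDA_ARCH")
    if arch:
        args.append(f"-DCMAKE_CUDA_ARCHITECTURES={arch}")
    # Accelerator backend selection, e.g. LIBERTEM_BACKEND=cuda
    backend = env.get("LIBERTEM_BACKEND", "cpu")
    args.append(f"-DBACKEND={backend.upper()}")
    return args

print(cmake_args_from_env({"LIBERTEM_CUDA_ARCH": "60",
                           "LIBERTEM_BACKEND": "cuda"}))
```

A `setup.py` or build script would then pass these arguments on to a `cmake` invocation before packaging the result into a wheel.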
/cc @ReimarBauer, who is a conda expert
Theoretically, the build matrix can become really big. In practice, I'm not sure, because we want to support consumer, workstation and server GPUs. CPU support could also be possible, and maybe we have to add support for AMD GPUs. I want to avoid restricting this, because it could cause problems on future systems. In general, if we use a package manager, we should try to ship every dependency we can with the package manager, which also means Boost. I like your idea of two different ways to get the application. But first we should try to realize a single parametrized installation with conda or pip. If that doesn't work, or is too complicated to use, we can follow your idea and provide two different ways to install the application: an easy way for common configurations via pip and a more complicated way for an optimized version. Besides, for alpaka-based applications we use CMake arguments to enable backends and set compiler optimizations, so we need a package manager which supports CMake builds. I will also talk with my colleagues about whether we have experience shipping alpaka applications beyond ugly CMake builds.
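To get a feeling for how big the matrix could become, one can enumerate the cross product of the build parameters. The concrete values below are illustrative assumptions, not a decided support list:

```python
from itertools import product

# Illustrative build parameters (assumptions, not a decided list):
gpu_archs = ["sm_60", "sm_70", "sm_75", "cpu-only"]
backends = ["cuda", "omp", "serial"]   # alpaka-style backend choices
cuda_versions = ["10.0", "10.2"]

# Cross product, dropping the contradictory cpu-only + cuda combinations:
matrix = [c for c in product(gpu_archs, backends, cuda_versions)
          if not (c[0] == "cpu-only" and c[1] == "cuda")]
print(len(matrix))  # prints 22: 4*3*2 = 24 combinations, minus 2 invalid ones
```

Even this small toy list yields over twenty distinct builds, which illustrates why publishing pre-built packages for every configuration doesn't scale.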
@SimeonEhrig would it have to be compiled for each individual GPU model or does it work like CPUs where there are certain instruction sets that work on many different models? |
The instruction sets of NVIDIA GPUs are forward compatible. That means application code which was compiled for SM60 also runs on GPUs that support SM70, but you can lose optimization potential.
Just for documentation purposes: We have discussed offline that it is possible to store different SM versions of the kernel code in one executable file. So only one CUDA package is needed. |
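Storing several SM versions in one executable works via nvcc's fat binary support. The helper below only assembles the `-gencode` flags (real nvcc syntax) for a list of architectures; the surrounding build integration is assumed. Appending PTX for the newest architecture keeps the binary forward compatible with future GPUs:

```python
def gencode_flags(sm_versions):
    """Build nvcc -gencode flags so the resulting fat binary contains
    native SASS for each requested SM version, plus PTX for the newest
    one so future architectures can JIT-compile it."""
    flags = []
    for sm in sorted(sm_versions):
        flags.append(f"-gencode=arch=compute_{sm},code=sm_{sm}")
    newest = max(sm_versions)
    flags.append(f"-gencode=arch=compute_{newest},code=compute_{newest}")
    return flags

print(" ".join(gencode_flags([60, 70, 75])))
```

A single CUDA package built with flags like these covers all listed architectures at the cost of a larger binary.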
About
At the moment, LiberTEM is distributed via pip. For this project, pip might not be the right solution, because we have additional needs for the alpaka backend:
[1] Automatic detection at build time is not a good idea, because a usual workflow on HPC systems is to install the packages on the login node (which has no GPUs) and allocate GPUs afterwards.
Prerequisites
Develop a dummy alpaka backend function with python binding #10
Potential candidates