Currently, nvidia-docker-compose adds and mounts all NVIDIA volumes and devices to every service found in the docker-compose file. It would make sense to let the user specify which service(s) should be bound to NVIDIA. Extend the nvidia-docker-compose interface to take extra arguments that specify target services at launch time.
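To make the request concrete, here is a minimal sketch of a compose file where only one of two services actually needs the GPU; the service names and images are hypothetical, not taken from the issue:

```yaml
# Hypothetical compose file: only `trainer` needs the GPU, yet
# nvidia-docker-compose currently injects NVIDIA volumes and devices
# into `web` as well.
version: "2"
services:
  trainer:
    image: tensorflow/tensorflow:latest-gpu
  web:
    image: nginx
```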
With version v0.4.0, you can now control which GPU devices are included by listing devices explicitly in docker-compose.yml. If you don't specify any devices, all GPU devices are made available to the container, as before. While that is an acceptable default, there is currently no way to assign no GPU device to a service. Usually this is not an issue, but if you want to ensure that a service runs on CPU only (e.g. TensorFlow defaults to using the GPU if one is available), then not exposing any GPU device to that service is the cleanest solution.
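For reference, a minimal sketch of the v0.4.0 behavior described above; the device path, service name, and image are assumptions used only for illustration:

```yaml
# Hypothetical example: listing a device explicitly so that only
# /dev/nvidia0 is exposed to this service instead of all GPUs.
version: "2"
services:
  trainer:
    image: tensorflow/tensorflow:latest-gpu
    devices:
      - /dev/nvidia0
```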