
**GPU models and configuration: `nvidia-smi`**
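As a quick check of which GPUs a machine exposes, `nvidia-smi` prints the device models, driver version, and current utilization; the `--query-gpu` form below is a standard option for listing just the names:

```sh
# full status table: GPU models, driver version, memory, utilization
nvidia-smi

# just the device indices and names, one per line
nvidia-smi --query-gpu=index,name --format=csv,noheader
```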


**1. Set the SHELL environment variable to /bin/bash**

```
$ history
    1  echo $SHELL
    2  SHELL=/bin/bash
```
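tmux picks up the shell from the SHELL environment variable (see the `man tmux` excerpt in the next section), so the variable needs to be visible to child processes. A minimal sketch; note that tcsh, the login shell in the transcript below, does not accept Bourne-style `NAME=value` assignments and uses `setenv` instead:

```sh
# bash/sh: export so child processes (e.g. tmux) inherit the setting
export SHELL=/bin/bash

# tcsh equivalent:
#   setenv SHELL /bin/bash
```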

**2. Setting up .tmux.conf**

```
leichen@gpu-compute4$ emacs .tmux.conf
leichen@gpu-compute4$ pwd
/home/home2/leichen
leichen@gpu-compute4$ echo $SHELL
/bin/tcsh
leichen@gpu-compute4$ SHELL=/bin/bash
leichen@gpu-compute4$ echo $SHELL
/bin/bash
```

From `man tmux`:

```
default-shell path
        Specify the default shell. This is used as the login shell for
        new windows when the default-command option is set to empty,
        and must be the full path of the executable. When started tmux
        tries to set a default value from the first suitable of the
        SHELL environment variable, the shell returned by getpwuid(3),
        or /bin/sh. This option should be configured when tmux is used
        as a login shell.
```

So, in your `.tmux.conf`:
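For example (a minimal sketch; `/bin/bash` is an assumption here, substitute the full path of your preferred shell):

```
# ~/.tmux.conf
# launch new windows with bash as the login shell
set-option -g default-shell /bin/bash
```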

**Reattach user namespaces**

```
set -g default-command "reattach-to-user-namespace -l zsh"
```

See https://stackoverflow.com/questions/23318284/change-tmux-default-to-zsh
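Note that `reattach-to-user-namespace` is a macOS-specific helper and has to be installed separately; assuming Homebrew is available, something like:

```sh
brew install reattach-to-user-namespace
```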

To dump the current global options into a config file as a starting point:

```sh
tmux show -g | cat > ~/.tmux.conf
```

**3. CUDA_VISIBLE_DEVICES**

The problem was caused by not setting the CUDA_VISIBLE_DEVICES environment variable correctly in the shell.

To target CUDA device 1, for example, set CUDA_VISIBLE_DEVICES with

```sh
export CUDA_VISIBLE_DEVICES=1
```

or

```sh
CUDA_VISIBLE_DEVICES=1 ./cuda_executable
```

The former sets the variable for the life of the current shell; the latter sets it only for the lifespan of that particular invocation.

If you want to expose more than one device, use

```sh
export CUDA_VISIBLE_DEVICES=0,1
```

or

```sh
CUDA_VISIBLE_DEVICES=0,1 ./cuda_executable
```
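Inside the process, the visible devices are re-enumerated starting from 0, so physical device 1 appears to the program as device 0. A quick sanity check, assuming PyTorch is installed (any CUDA-aware tool would do):

```sh
# should print 1: only physical GPU 1 is visible, re-indexed as device 0
CUDA_VISIBLE_DEVICES=1 python -c "import torch; print(torch.cuda.device_count())"
```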