Getting Started
CarpetX requires:
- a C++17 compiler; GNU compilers version 10 or later are known to work
- if using CUDA, a sufficiently new CUDA toolkit (at least version 11, which supports C++17). Please see below for the extra modifications required in that case.
- access to a number of external libraries
- ADIOS2
- BLAS / LAPACK
- AMReX
- GSL
- FFTW3
- hwloc
- HDF5
- NSIMD
- openPMD
- ssht
- Silo
- yaml_cpp
- zlib
- a modern version of CMake to build the external libraries
These can optionally be installed by hand to speed up compilation (see how this is done in the azure-pipelines/Dockerfile file), or they can be obtained automatically using GetComponents and a thornlist.
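As an illustration only, on a recent Ubuntu release several of these libraries are available as distribution packages and could be installed like this (the package names are assumptions and may differ between releases; dependencies without distribution packages, such as AMReX or NSIMD, are built by Cactus itself):
sudo apt-get install build-essential cmake gfortran libgsl-dev libfftw3-dev libhwloc-dev libhdf5-dev libopenmpi-dev liblapack-dev zlib1g-dev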
Using a thornlist, the steps on a typical Ubuntu/Linux workstation with the packages installed as documented are:
curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/master/GetComponents
chmod a+x GetComponents
./GetComponents --root Cactus --parallel --no-shallow https://github.com/eschnett/CarpetX/wiki/files/carpetx.th
cd Cactus
./simfactory/bin/sim setup-silent
then edit the file simfactory/mdb/optionlists/generic.cfg and change CXXFLAGS to
CXXFLAGS=-g -std=gnu++17
i.e. use C++17 instead of C++11. If using gcc 8, you will also have to explicitly link against -lstdc++fs by adding the line
LIBS = gfortran stdc++fs
to avoid link-time errors.
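One way to apply these edits non-interactively is sketched below; it assumes the stock generic.cfg still sets -std=gnu++11 and does not yet contain a LIBS line, so check the file afterwards:
sed -i 's/-std=gnu++11/-std=gnu++17/' simfactory/mdb/optionlists/generic.cfg
# only needed with gcc 8:
echo 'LIBS = gfortran stdc++fs' >> simfactory/mdb/optionlists/generic.cfg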
Then build:
./simfactory/bin/sim build --thornlist thornlists/carpetx.th
This will provide a minimal working version of CarpetX, sufficient to get started, but not very optimized and without CUDA.
If you are using a cluster, you will have to make sure yourself that the correct option list etc. is used.
Run a test:
# or however many threads you would like to use
OMP_NUM_THREADS=4 exe/cactus_sim repos/carpetx/WaveToyX/par/reflecting.par
which will run until cctk_final_time=1.0.
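If your build includes MPI, you can also combine MPI ranks with OpenMP threads; for example (a sketch, assuming an mpirun launcher is available on your workstation):
OMP_NUM_THREADS=2 mpirun -np 2 exe/cactus_sim repos/carpetx/WaveToyX/par/reflecting.par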
These steps are for typical Ubuntu/Linux workstations and assume the same setup as the preceding section.
When building AMReX, you need to decide whether CUDA is supported or not. If CUDA is supported, then a GPU is required at run time. It is not possible to configure AMReX with CUDA, and then to run without CUDA. In practice, I thus have two separate AMReX (and two separate Einstein Toolkit) builds.
This is mostly untested, so please update as you find things out.
To support CUDA, some changes are required; currently a different branch of AMReX is also needed:
cd repos/cactusamrex
git checkout rhaas/cuda
cd ../../
This branch contains branches of the GPU-using thorns in cactusamrex that build using CUCC instead of CXX when AMREX_ENABLE_GPU is true. They rely on the thorn CUDA (see below) to correctly link.
Then make a copy of the generic option list used by simfactory:
cp simfactory/mdb/optionlists/generic.cfg simfactory/mdb/optionlists/generic-cuda.cfg
and edit the copy.
First, make sure that CPPFLAGS includes -DSIMD_CPU:
CPPFLAGS = -DSIMD_CPU
then add at the end:
AMREX_ENABLE_CUDA=yes
CUCC = nvcc
CUCCFLAGS = --compiler-bindir g++ -x cu -g -std=c++17 -D_GNU_SOURCE --expt-relaxed-constexpr --extended-lambda --forward-unknown-to-host-compiler --Werror cross-execution-space-call --Werror ext-lambda-captures-this --relocatable-device-code=true --objdir-as-tempdir
CUCC_OPTIMISE_FLAGS=-O3 -fopenmp
CUCC_PROFILE_FLAGS = -pg
CUCC_WARN_FLAGS = -Wall
CUCC_OPENMP_FLAGS = -fopenmp
DISABLE_INT16=yes
DISABLE_REAL16=yes
making sure to retain the previously applied changes for CXXFLAGS and possibly LIBS (if using gcc 8):
CXXFLAGS=-g -std=gnu++17
LIBS = gfortran stdc++fs
Then enable CUDA in the thornlist by adding
enabled-thorns = ExternalLibraries/CUDA
to the [default] section of simfactory/etc/defs.local.ini (or the section named after your workstation). This thorn adjusts Cactus' link rules so that CUDA code is correctly linked.
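For illustration, the relevant part of simfactory/etc/defs.local.ini would then contain (any other entries in the section remain unchanged):
[default]
enabled-thorns = ExternalLibraries/CUDA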
Finally, create a fresh configuration:
./simfactory/bin/sim build --thornlist thornlists/carpetx.th --optionlist simfactory/mdb/optionlists/generic-cuda.cfg
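Since an AMReX configured with CUDA requires a GPU at run time (see above), it can be convenient to keep the CUDA build in a separate configuration next to the CPU one; for example (the configuration name sim-cuda is only an illustration):
./simfactory/bin/sim build sim-cuda --thornlist thornlists/carpetx.th --optionlist simfactory/mdb/optionlists/generic-cuda.cfg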
You will only have to pass the --thornlist and --optionlist options for the first build, since simfactory makes copies of the files and stores them in the configuration.
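For subsequent rebuilds of the same configuration it should therefore be enough to run, for example:
./simfactory/bin/sim build
(this rebuilds the default configuration, named sim; pass the configuration name explicitly if you created a differently named one).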
Avoid working on /home as it has limited storage. Instead, create your workspace with your user name at:
mkdir /ddnA/project/sbrandt/carpetx/$USER
cd /ddnA/project/sbrandt/carpetx/$USER
Deep Bayou uses the thornlist in the CarpetX wiki Getting Started document:
curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/master/GetComponents
chmod a+x GetComponents
./GetComponents https://github.com/eschnett/CarpetX/wiki/files/carpetx.th
cd Cactus
./simfactory/bin/sim setup-silent
Set your user name, email, and allocation in ./simfactory/etc/defs.local.ini
[default]
user = YOUR_USERNAME
email = YOUR_EMAIL
allocation = hpc_et_test3
sourcebasedir = /ddnA/project/sbrandt/carpetx/$USER
./simfactory/bin/sim build sim-gpu --machine db-sing-nv --thornlist thornlists/carpetx.th
./simfactory/bin/sim create-submit shocktube --procs=48 --ppn-used=48 --num-threads=1 --config sim-gpu --machine db-sing-nv --par repos/cactusamrex/AsterX/par/shocktube.par
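To check on the submitted simulation you can use simfactory's usual commands, for example:
./simfactory/bin/sim list-simulations
./simfactory/bin/sim show-output shocktube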
After watching Matt Anderson's talk at the 2022 ET Workshop, I realized that we can use the image from the workshop on any cluster that supports singularity. I tested that I can build and run with Singularity on Deep Bayou. These were my steps.
I plan to modify simfactory so that it can understand singularity sometime in the near future.
srun -A hpc_et_test2 -n 1 --pty singularity build -F /work/sbrandt/images/etworkshop.simg docker://stevenrbrandt/etworkshop
Note that the image (etworkshop.simg) must be in the singularity group if you are running on Deep Bayou; otherwise, you won't be able to run it. The reason for this is that the HPC administrators at LSU wish to limit who can create images. Note also that on Deep Bayou, you can only access singularity through slurm.
export CACTUS_SING="srun -A hpc_et_test2 -p checkpt -N 1 --cpus-per-task 48 --ntasks-per-node 1 --pty singularity exec --nv --bind /var/spool --bind /project/sbrandt --bind /etc/ssh/ssh_known_hosts --bind /work --bind /scratch /work/sbrandt/images/etworkshop.simg"
$CACTUS_SING build -j10 $CACTUS_SIM --thornlist $CACTUS_THORNLIST --optionlist /usr/carpetx-spack/local-gpu.cfg
Note that by default, your home directory is mounted into the Singularity image. The --bind option allows you to add other directories. The --nv option enables the nvidia drivers. If you run on a CPU, you don't need that (actually, you don't need it set to compile, but it's easier to set CACTUS_SING once for both compiling and running). If you run on the CPU, use the option list /usr/carpetx-spack/local-cpu.cfg.
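For a CPU-only workflow, the corresponding setting would simply drop --nv (the same srun/singularity invocation as above, with the paths unchanged):
export CACTUS_SING="srun -A hpc_et_test2 -p checkpt -N 1 --cpus-per-task 48 --ntasks-per-node 1 --pty singularity exec --bind /var/spool --bind /project/sbrandt --bind /etc/ssh/ssh_known_hosts --bind /work --bind /scratch /work/sbrandt/images/etworkshop.simg"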
Yes, this needs to go through simfactory eventually...
$CACTUS_SING ./exe/cactus_sim-gpu parfile.par
This works because mpich (and its cousins) have worked hard to be interoperable between various versions through slurm and pmi2. It's magic!
Note: If debugging with nsys on Deep Bayou, please set and export the TMPDIR variable to something like /work/$USER/tmp. The nsys program generates lots of data and causes trouble for the /tmp file system.
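For example:
mkdir -p /work/$USER/tmp
export TMPDIR=/work/$USER/tmp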
Avoid working on /home as it has limited storage. Instead, create your workspace with your user name at:
mkdir /ddnA/project/sbrandt/carpetx/$USER
cd /ddnA/project/sbrandt/carpetx/$USER
Deep Bayou uses the thornlist in azure-pipelines/carpetx.th:
curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/ET_2021_05/GetComponents
chmod a+x GetComponents
./GetComponents https://bitbucket.org/eschnett/cactusamrex/raw/master/azure-pipelines/carpetx.th
cd Cactus
./simfactory/bin/sim setup-silent
This uses the master branch of all thorns and helper libraries in CarpetX, that is, it does not use rhaas/cuda or the ExternalLibraries in rhaas80's GitHub account.
Note that on Deep Bayou (db1.hpc.lsu.edu), the following cfg files are set up for CarpetX:
# /project/sbrandt/cfgs/spack.cfg # Use the CPU only
# /project/sbrandt/cfgs/spack-cuda.cfg # Use the GPU
# /work/eschnett/Cactus/simfactory/mdb/optionlists/db-gpu.cfg # for hackathon
/ddnA/project/sbrandt/carpetx/$USER/Cactus/repos/cactusamrex/azure-pipelines/deep-bayou.cfg # recommended
To use it, add these lines to your simfactory/etc/defs.local.ini:
[db1.hpc.lsu.edu]
allocation = hpc_et_test2
# Set the relative path to your option list from the Cactus dir:
optionlist = repos/cactusamrex/azure-pipelines/deep-bayou.cfg
runscript = /project/sbrandt/carpetx/flopezar/db-gpu.run
sourcebasedir = /ddnA/project/sbrandt/carpetx/@USER@
basedir = /ddnA/project/sbrandt/carpetx/@USER@/simulations
envsetup = <<EOF
export SPACK_ROOT=/project/sbrandt/spack
source $SPACK_ROOT/share/spack/setup-env.sh
eval `spack --config-scope /home/sbrandt/.spack load --sh mpich`
eval `spack --config-scope /home/sbrandt/.spack load --sh yaml-cpp`
EOF
# ExternalLibrariesCUDA is used only with branch rhaas/cuda, not with master
#enabled-thorns = <<EOF
# ExternalLibraries/CUDA
# ExternalLibraries/RePrimAnd
#EOF
Then compile using:
./simfactory/bin/sim build --thornlist thornlists/carpetx.th
To submit your run, use simfactory submit, or ask for an interactive session. For the latter, Deep Bayou uses slurm, so you can enter the batch queue as follows. Contact [email protected] to get an allocation or space in the /project folder.
srun -A hpc_et_test2 -N 1 --pty bash
To actually run your code, you need to do something like this:
export SPACK_ROOT=/project/sbrandt/spack
source $SPACK_ROOT/share/spack/setup-env.sh
eval `spack --config-scope /home/sbrandt/.spack load --sh mpich`
eval `spack --config-scope /home/sbrandt/.spack load --sh yaml-cpp`
mpirun -np 2 ./exe/cactus_sim parfile
There is project disk space available (note that files on /work are deleted after 60 days). Just make a directory as follows:
mkdir -p /project/sbrandt/carpetx/$USER
and then work in /project/sbrandt/carpetx/$USER.
If Deep Bayou is not working for some reason, you can use smic (smic.hpc.lsu.edu). It has older GPUs, but it should work. I was able to compile CarpetX (without PETSc) using the following config:
/project/sbrandt/spack-smic/local.cfg
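A build command analogous to the ones above might then look like this (a sketch; the option list path is the one listed above, everything else follows the earlier steps):
./simfactory/bin/sim build --thornlist thornlists/carpetx.th --optionlist /project/sbrandt/spack-smic/local.cfg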