NERSC Open Hackathon 2024
The Hackathon is described here.

Zoom link: https://us06web.zoom.us/j/81999469980?pwd=1nDlwyma78Pj7XMH6OE41JKvN6iTHM.1
Meeting ID: 819 9946 9980 // Passcode: 924348
- Teams and mentors attending the event will be given access to the Perlmutter compute system for the duration of the Hackathon. Additional systems may also be made available, based on availability and interest.
- If you don't have an account at NERSC, please sign up for a training account here: https://iris.nersc.gov/train (training code: dyrW). If your organization is not listed in the dropdown menu, select "NERSC".
- The training account will be active from August 6 to August 30, 2024. To log in: `ssh [email protected]`
- For more information visit: https://docs.nersc.gov/getting-started and https://docs.nersc.gov/connect/
- If you have questions, please submit them in the Slack #perlmutter-cluster-support channel.
- NERSC Open Hackathon Day 0 (Team/Mentor Meeting): August 06, 2024, 10:30 AM PT
- NERSC Open Hackathon Day 1: August 13, 2024, 9:00 AM - 5:00 PM PT
- NERSC Open Hackathon Day 2: August 20, 2024, 9:00 AM - 5:00 PM PT
- NERSC Open Hackathon Day 3: August 21, 2024, 9:00 AM - 5:00 PM PT
- NERSC Open Hackathon Day 4: August 22, 2024, 9:00 AM - 5:00 PM PT
- Hannah Ross (mentor)
- Mukul Dave (mentor)
- Steve Brandt
- Michail Chabanov
- Lorenzo Ennoggi
- Roland Haas
- Liwei Ji
- Jay Kalinani
- Lucas Timotheo Sanches
- Erik Schnetter
Please join the NERSC Open Hackathon workspace using the following link: https://join.slack.com/t/nerscopenhackathon/shared_invite/zt-2nwxpsmev-GhiLfxFVsJ86UmlVH6tStQ
After joining the workspace, please search for and join the #team-asterx channel.
- Create an `ET` folder in the home directory:

```
cd ~/
mkdir ET
cd ET
```
- Download the code via the following commands:

```
curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/master/GetComponents
chmod a+x GetComponents
./GetComponents --root Cactus --parallel --no-shallow https://raw.githubusercontent.com/jaykalinani/AsterX/main/Docs/thornlist/asterx.th
```
- Add a `defs.local.ini` file in `Cactus/simfactory/etc/`, with details on the user account and the source and base directory paths; a minimal sketch is shown after this item. See, for example: https://github.com/jaykalinani/AsterX/blob/main/Docs/compile-notes/frontier/defs.local.ini
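For reference, a minimal sketch of what such a file might contain (the key names follow simfactory's `defs.local.ini` template; all values below are placeholders, not actual account details):

```
[default]
user            = <your_username>
email           = <your_email>
allocation      = <your_allocation>
sourcebasedir   = /path/to/ET
basedir         = /path/to/simulations
```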
- Return to the `Cactus` directory and compile using the following command, where `<config_name>` is a name of your choice for the configuration:

```
./simfactory/bin/sim build -j32 <config_name> --thornlist=./thornlists/asterx.th
```
- Example command to create and submit a job for a shocktube test via simfactory:

```
./simfactory/bin/sim submit B1 --parfile=./arrangements/AsterX/AsterX/test/Balsara1_shocktube.par --config=<config_name> --allocation=m3374_g --procs=1 --num-threads=1 --ppn-used=1 --walltime 00:05:00
```
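Once submitted, the job can be monitored with standard simfactory commands (a sketch; `B1` is the simulation name from the submit command above):

```
./simfactory/bin/sim list-simulations    # list known simulations and their status
./simfactory/bin/sim show-output B1      # print the output produced so far
```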
- For a magnetized TOV test with spacetime evolution, an example submit command via simfactory:

```
./simfactory/bin/sim submit magTOV_unigrid --parfile=./arrangements/AsterX/AsterX/par/magTOV_unigrid.par --config=<config_name> --allocation=m3374 --procs=4 --num-threads=1 --ppn-used=4 --walltime 00:10:00
```
Old Perlmutter presentations say that CUDA-aware MPI (which should show up as `libmpi_gtl_cuda.so.0` in `ldd` output) requires loading the relevant `craype-accel-*` module before linking; on Perlmutter this is `craype-accel-nvidia80`.
At runtime one should then set `export MPICH_GPU_SUPPORT_ENABLED=1`.
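Putting these together, a sketch of the full check (assuming a configuration named `sim`, so the executable is `exe/cactus_sim`; the configuration name is an arbitrary choice):

```
# Load the accelerator module before building/linking (Perlmutter)
module load craype-accel-nvidia80

# After the build, confirm the CUDA-aware GTL library was linked in
ldd exe/cactus_sim | grep libmpi_gtl_cuda    # expect libmpi_gtl_cuda.so.0

# Enable GPU support in Cray MPICH at run time
export MPICH_GPU_SUPPORT_ENABLED=1
```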
The build system ultimately uses just `make`, but with some auto-generated code (which is not in the performance-critical path). The environment modules loaded are those in the `envsetup` entry in the file `simfactory/mdb/machines/perlmutter-p1.ini`. Compilers and other options are in `simfactory/mdb/optionlists/perlmutter-p1.cfg`. The SLURM script is in `simfactory/mdb/submitscripts/perlmutter-p1.sub`, and it runs the actual code as a bash script in `simfactory/mdb/runscripts/perlmutter-p1.run`.
One way to change compile-time options is to edit `simfactory/mdb/optionlists/perlmutter-p1.cfg` and then run:

```
./simfactory/bin/sim build -j32 <config_name> --thornlist=./thornlists/asterx.th --reconfig --optionlist perlmutter-p1.cfg
```
For a more traditional build process (without the extra "simfactory" layer), one can also manually load the modules listed in the `envsetup` key, then use

```
make foo-config options=simfactory/mdb/optionlists/perlmutter-p1.cfg
make -j32 foo
```

to build the configuration `foo` in `exe/cactus_foo`. Editing options requires re-running the `-config` step.
Helpful make targets are `make foo-clean` and `make foo-realclean`, as well as `make foo-build BUILDLIST=<Thorn>`, where the thorn could be e.g. `AsterX`; this builds only that one module (e.g. for testing compiler options).
Set `export VERBOSE=yes` to see the exact commands `make` executes.
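For example, to rebuild a single thorn while echoing every command (combining the targets above; `foo` and `AsterX` as in the earlier examples):

```
export VERBOSE=yes
make foo-build BUILDLIST=AsterX
```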
- AsterX Team introduction slides here