add profile for arcc #622
Conversation
Overall good config and docs, but I'm not particularly happy with the custom parameters... I've left suggestions for alternative strategies
docs/arcc.md (outdated)

> ## Parameters of Interest
>
> #### `--slurm_account`
I'm not sure about the use of these custom parameters. They may result in red-herring warnings in both Nextflow and nf-core pipelines, as they aren't in the pipeline code or schema. Adding them to the ignore-parameters list would probably silence the warnings, but there's the risk that if someone specifies their own custom config with its own ignore list, yours would be lost.
We've recommended to others that they ask users to set bash env variables with the information, and pull these into the config with the Groovy function whose name I've forgotten now... But I'm pretty sure @nvnieuwk and @matthdsm use it in the VSC Ghent profile (?)
P.S. it's 5am and I'm on my phone, hence the less-than-ideal referencing 😅
I've seen mixed usage here. I think I could probably get away with an `SBATCH_ACCOUNT` env variable to be consistent with other configs.
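For reference, a minimal sketch of the env-variable approach, assuming the config reads `SBATCH_ACCOUNT` via Groovy's `System.getenv` (the fallback account value is illustrative, not necessarily what the final config should use):

```groovy
// Hypothetical sketch: read the Slurm account from an environment variable
// instead of a custom pipeline parameter, falling back to a default if unset.
def slurm_account = System.getenv('SBATCH_ACCOUNT') ?: 'healthdatasci'

process {
    executor       = 'slurm'
    clusterOptions = "--account=${slurm_account}"
}
```

As a side note, Slurm itself generally honours `SBATCH_ACCOUNT` when submitting jobs, so the explicit `--account` flag may even be redundant on some setups.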
conf/arcc.config (outdated)

> `slurm_account = 'healthdatasci'`
> `slurm_opts = ''`
> `slurm_queue = 'teton,moran'`
Rather than specifying this with custom parameters, why not simple sub-profiles within this config? So: `-profile arcc,teton`
What happens when both partitions are to be used? Should I create a `teton_moran` profile along with the individual profiles?
I wonder if it would be OK to just have a default that uses the three partitions:

`-profile arcc`

Then use the additional profiles to explicitly set the partition:

`-profile arcc,beartooth_bigmem`

Haha, I'm trying to avoid `-profile arcc,beartooth_moran_teton`
Sorry for spamming, but the reasoning for specifying multiple partitions is to squeeze small jobs onto available nodes with little wait time.
Yeah I think the profile example makes sense 👍
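To make the thread concrete, here is a rough sketch of how the sub-profiles could be laid out inside `conf/arcc.config` (the partition names and structure are assumptions for illustration, not the final config in this PR):

```groovy
// Hypothetical sketch: the default submits to all three partitions so small
// jobs can land wherever there is space; the sub-profiles pin a single
// partition, e.g. `-profile arcc` vs `-profile arcc,teton`.
process {
    executor = 'slurm'
    queue    = 'beartooth,moran,teton'
}

profiles {
    beartooth { process.queue = 'beartooth' }
    moran     { process.queue = 'moran' }
    teton     { process.queue = 'teton' }
}
```

This follows the same pattern other institutional configs use for cluster-specific sub-profiles, so no custom `--slurm_queue` parameter is needed.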
conf/arcc.config (outdated)

> `singularity_pull_docker_container = true`
>
> `slurm_account = 'healthdatasci'`
> `slurm_opts = ''`
What sort of things do you think would be going here? Isn't there a risk of a clash with existing opts that Slurm specifies by default?
Additional GPU-related options would go here.
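For illustration, a hedged sketch of the kind of thing that could go there without a custom parameter (the process label and GPU flag are assumptions, not the actual ARCC setup):

```groovy
// Hypothetical sketch: append extra Slurm flags, e.g. a GPU request, only for
// processes carrying a 'gpu' label, rather than via a global slurm_opts param.
process {
    withLabel: 'gpu' {
        clusterOptions = '--gres=gpu:1'
    }
}
```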
@jfy133 thank you very much for the suggestions and feedback, I will make adjustments.

@jfy133 I think I addressed most things. Any constructive feedback would be much appreciated! Haha, like I said, I'm still learning what the best approach is here.
I think that's OK, assuming it's been tested and working as intended 👍
Co-authored-by: James A. Fellows Yates <[email protected]>
OK! Thank you so much for all the feedback @jfy133. Did a bunch of testing: I tested each profile and found a few that had been retired but were still listed in the docs. I also verified the
I think this will work out quite nicely, thank you very much for the profile suggestion!

Go ahead and merge once you're ready!

Thank you @jfy133!
name: arcc
about: A profile for the ARCC HPC system at the University of Wyoming, USA
NOTE: This is my first submission to nf-core/configs. Any feedback or best practices will be most welcome!
Please follow these steps before submitting your PR:

- [ ] If the PR is a work in progress, include `[WIP]` in its title
- [ ] Make sure the PR is against the `master` branch

Steps for adding a new config profile:

- [ ] Add your custom config file to the `conf/` directory
- [ ] Add your documentation file to the `docs/` directory
- [ ] Add your custom profile to the `nfcore_custom.config` file in the top-level directory
- [ ] Add your custom profile to the `README.md` file in the top-level directory
- [ ] Add your custom profile to the `profile:` scope in `.github/workflows/main.yml`
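As a worked example of the `nfcore_custom.config` step, a hedged sketch of what the new entry might look like (following the existing pattern in that file; exact wording and placement among the other profiles may differ):

```groovy
// Hypothetical sketch: entry in nfcore_custom.config wiring up `-profile arcc`.
profiles {
    arcc { includeConfig "${params.custom_config_base}/conf/arcc.config" }
}
```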