At the moment, the application can easily be run on standard HPC architectures using standard HPC job schedulers (e.g. Slurm/LSF). However, in the current implementation, the whole pipeline is run as a single job, so one set of core requirements and one set of memory requirements is applied to every rule.
Snakemake, however, allows developers to implement HPC resource management and job submission on a rule-by-rule basis. Taking advantage of this feature would reduce the overall cost of FilTar HPC submissions, would likely reduce HPC queue waiting times, and would probably make pipeline logging/administration easier as well.
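As an illustrative sketch of what this could look like (the rule name, input/output paths, and shell command below are hypothetical and not taken from the FilTar codebase), each rule would declare its own `threads` and `resources`:

```
rule align_reads:
    input:
        "data/{sample}.fastq.gz"
    output:
        "results/{sample}.bam"
    threads: 8
    resources:
        mem_mb=16000
    shell:
        # Align reads and sort the output; tool choice here is purely illustrative
        "hisat2 -p {threads} -x genome_index -U {input} | samtools sort -o {output}"
```

The workflow could then be submitted so that each rule instance becomes its own cluster job, with the scheduler flags filled in from the per-rule values, e.g. for Slurm:

```
snakemake --jobs 100 --cluster "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}"
```

This way, lightweight rules no longer reserve the resources needed by the heaviest step of the pipeline, and each rule's job gets its own scheduler log entry.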