This project provides tools to run computational tasks on an HPC cluster managed by SLURM or on your local machine. Follow the instructions below to get started.
If you do not have Miniconda installed:
- Go to the Miniconda download page.
- Download the installer for your operating system.
- Follow the installation instructions for your platform.
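For example, on a Linux x86_64 machine one common way is to install Miniconda from the terminal. The URL below points to the generic "latest" Linux installer; pick the installer that matches your OS and architecture on the download page:

```bash
# Download the latest Miniconda installer for Linux x86_64
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
# Run the installer, follow the prompts, then restart your shell
bash Miniconda3-latest-Linux-x86_64.sh
```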
This project uses a `requirements.yml` file (note: `.yml`, not `.txt`) to set up the Python environment with all necessary libraries and dependencies.
- Open your terminal or command prompt.
- Navigate to the project directory (where this `README.md` is located).
- Run the following command:

  ```bash
  conda env create -f requirements.yml
  ```

  This will automatically create a Conda environment with the name specified in `requirements.yml`. If you need a custom environment name, run:

  ```bash
  conda env create -f requirements.yml --name <your_env_name>
  ```
- Activate the newly created environment:

  ```bash
  conda activate mpoctrl_env
  ```

  Replace `mpoctrl_env` with the environment name specified in `requirements.yml`, or with the custom name you provided.
- Test your environment:

  ```bash
  python test_env.py  # should output "All libraries are installed correctly"
  ```
The HPC is assumed to be managed by SLURM. Follow these steps to submit your jobs:
Choose the appropriate Python script based on your HPC's GPU availability:

- For GPU-enabled HPC: `submit_jobs_to_hpc_gpu.py`
- For CPU-only HPC: `submit_jobs_to_hpc_cpu.py`
Open the respective script and update the following parameters:
- **Input Data Directory**: Path to your input data folder.
- **Output Directory**: Path to the folder where results will be saved.
- **Data Names**: Names of the data files or datasets.
- **Email Address**: Your email address to receive job notifications.
- **Other parameters** as needed.
Run the appropriate Python script to submit the job:

```bash
python submit_jobs_to_hpc_gpu.py
```

or

```bash
python submit_jobs_to_hpc_cpu.py
```
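After submission, you can check on the job with standard SLURM commands (these are generic SLURM utilities, independent of this project's scripts):

```bash
squeue -u $USER    # list your queued and running jobs
scancel <job_id>   # cancel a job by its SLURM job ID, if needed
```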
The job submission workflow involves the following:

- `submit_jobs_to_hpc_gpu.py` or `submit_jobs_to_hpc_cpu.py`: Prepares and submits your job to SLURM.
- `run_on_hpc_gpu.sh` or `run_on_hpc_cpu.sh`: Executes the task on the HPC, passing data-related parameters to the main program.
- `src/main.py`: The main Python file where the algorithm runs.
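For orientation, a SLURM wrapper script such as `run_on_hpc_gpu.sh` typically follows the pattern sketched below. This is only an illustrative sketch: the actual `#SBATCH` resource requests and the argument names passed to `src/main.py` (here `--input_dir`, `--output_dir`, `--data_name`) are assumptions, so check the shipped script for the real values.

```bash
#!/bin/bash
#SBATCH --job-name=flux_job           # job name shown in the SLURM queue
#SBATCH --output=flux_job_%j.out      # log file (%j expands to the job ID)
#SBATCH --gres=gpu:1                  # request one GPU (omit on CPU-only clusters)
#SBATCH --cpus-per-task=4             # CPU cores per task
#SBATCH --mem=16G                     # memory per node
#SBATCH --time=24:00:00               # wall-time limit
#SBATCH --mail-type=END,FAIL          # email notifications on completion/failure
#SBATCH --mail-user=you@example.com   # notification address

# Make `conda activate` available in a non-interactive batch shell
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate mpoctrl_env

# Argument names are illustrative, not the project's actual interface
python src/main.py --input_dir "$1" --output_dir "$2" --data_name "$3"
```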
If you want to run the code on your local laptop:

Open the shell script `run_on_local.sh` and update the following (a sketch is shown after this list):

- **Input Data Directory**: Path to your input data folder.
- **Output Directory**: Path to the folder where results will be saved.
- **Data Names**: Names of the data files or datasets.
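As a rough illustration, the configurable section of `run_on_local.sh` might look like the sketch below. The variable and argument names here are placeholders, so edit whatever the actual script defines rather than copying this verbatim:

```bash
#!/bin/bash
# Placeholder names -- adapt to the variables actually defined in run_on_local.sh
INPUT_DIR="/path/to/input_data"    # Input Data Directory
OUTPUT_DIR="/path/to/results"      # Output Directory
DATA_NAME="my_dataset"             # Data Name(s)

source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate mpoctrl_env
python src/main.py --input_dir "$INPUT_DIR" --output_dir "$OUTPUT_DIR" --data_name "$DATA_NAME"
```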
Execute the shell script using the following command:

```bash
sh run_on_local.sh
```
The algorithm will generate an output results directory with a name in the format:

```
<data_name>-<network_name>-Flux-<time>
```

You will find all result files in this directory. The structure includes:

- Computation results
- Logs
- Any other generated files
- HPC Workflow: `submit_jobs_to_hpc_gpu.py` (or `submit_jobs_to_hpc_cpu.py`) → `run_on_hpc_gpu.sh` (or `run_on_hpc_cpu.sh`) → `src/main.py`
- Local Workflow: `sh run_on_local.sh`
Feel free to reach out if you encounter any issues!