Running inference of UCE on multiple GPUs #55

Open
gmurtaza404 opened this issue Feb 9, 2025 · 1 comment

Comments

@gmurtaza404

Hello!

When I try to run inference with multiple GPUs, the code crashes with the error:

```
Traceback (most recent call last):
  File "/UCE/eval_single_anndata.py", line 155, in <module>
    main(args, accelerator)
  File "/UCE/eval_single_anndata.py", line 85, in main
    processor.run_evaluation()
  File "/UCE/evaluate.py", line 145, in run_evaluation
    run_eval(self.adata, self.name, self.pe_idx_path, self.chroms_path,
  File "/UCE/evaluate.py", line 239, in run_eval
    batch_sentences = model.module.pe_embedding(batch_sentences.long())
                      ^^^^^^^^^^^^
  File "***/UCE/env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'TransformerModel' object has no attribute 'module'. Did you mean: 'modules'?
```

Any suggestions on how to fix this?

@Yanay1
Collaborator

Yanay1 commented Feb 10, 2025

Hi, please add the argument `--multi_gpu=True` and launch the command with `accelerate`.
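For context on why the crash happens: the line `model.module.pe_embedding(...)` assumes the model has been wrapped (e.g. by `DistributedDataParallel`, which accelerate applies when multi-GPU mode is enabled), so the underlying model lives on the `.module` attribute. When the script runs without that wrapping, the model is the bare `TransformerModel` and `.module` does not exist. A minimal sketch of the failure mode and a defensive unwrap, using a hypothetical stand-in class rather than UCE's actual model:

```python
class TransformerModel:
    """Hypothetical stand-in for UCE's model class (not the real implementation)."""
    def pe_embedding(self, ids):
        return [i * 2 for i in ids]  # dummy embedding lookup

class WrappedModel:
    """Mimics DDP/accelerate wrapping: the real model lives on .module."""
    def __init__(self, module):
        self.module = module

plain = TransformerModel()          # what you get on a single process
wrapped = WrappedModel(TransformerModel())  # what you get under multi-GPU wrapping

# plain.module raises AttributeError -- the crash shown in the traceback.
# getattr with a fallback handles both the wrapped and unwrapped cases:
for model in (plain, wrapped):
    core = getattr(model, "module", model)  # unwrap if wrapped, else use as-is
    print(core.pe_embedding([1, 2, 3]))
```

Running with `--multi_gpu=True` under `accelerate launch`, as suggested above, makes the wrapping assumption hold; the `getattr` pattern is just an illustration of why the attribute is missing otherwise.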
