
autoparallelization for GPU job. #278

Open · jungsdao opened this issue Jan 4, 2024 · 1 comment

@jungsdao (Contributor)

jungsdao commented Jan 4, 2024

I have a question about autoparallelization with GPUs.
I want to run two minima hopping jobs in parallel, each using its own GPU.
Is this possible with the autoparallelize function in wfl? I'm not sure it's actually using the GPUs, and it doesn't seem as fast as I'd expect from a GPU job.

@bernstei (Contributor)

bernstei commented Jan 4, 2024

Note that I'm assuming you're talking about parallelization on a single node with python subprocesses. If that's not true, you should clarify.

wfl autoparallelization currently doesn't know anything about GPUs. Single-node parallelization just uses python's multiprocessing pool to run separate python subprocesses and divides the work among them. I agree that dealing nicely with multiple GPUs sounds useful, but I'm not sure exactly how to do it. If you were to run multiple python processes on a multi-GPU node manually, how would you make sure they're each using a different GPU?
