I have one question regarding autoparallelization using the GPU.
I want to run two minima hopping jobs in parallel, each using its own GPU.
Is this possible with the `autoparallelize` function in wfl? I'm not sure it's actually using the GPU, and it doesn't seem to be as fast as I expected from a GPU job.
Note that I'm assuming you're talking about parallelization on a single node with python subprocesses. If that's not true, you should clarify.
wfl autoparallelization currently doesn't know anything about GPUs. Single-node parallelization just uses Python's multiprocessing pool to run separate python subprocesses and divides the work among them. I agree that dealing nicely with multiple GPUs sounds useful, but I'm not sure exactly how to do it. If you were to run multiple python processes on a multi-GPU node manually, how would you make sure they're each using a different GPU?
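For what it's worth, one common pattern outside wfl is to pin each worker process to its own GPU by setting `CUDA_VISIBLE_DEVICES` in a pool initializer, before the GPU framework is initialized in that process. The sketch below is just an illustration under those assumptions; it is not wfl's API, and `run_minima_hopping` and `N_GPUS` are hypothetical names standing in for the real per-item task and GPU count.

```python
# Minimal sketch (not wfl's API): pin each multiprocessing pool worker to a
# different GPU by setting CUDA_VISIBLE_DEVICES before any CUDA work happens
# in that process.
import os
from multiprocessing import Pool, current_process

N_GPUS = 2  # assumption: two GPUs on the node


def _pin_gpu():
    # Pool workers get a 1-based index; _identity is a private but commonly
    # used attribute that exposes it.
    worker_id = current_process()._identity[0]
    os.environ["CUDA_VISIBLE_DEVICES"] = str((worker_id - 1) % N_GPUS)


def run_minima_hopping(config_id):
    # Hypothetical per-item task; in practice the GPU-backed calculator would
    # be constructed here, *after* CUDA_VISIBLE_DEVICES has been set.
    return f"item {config_id} sees GPU {os.environ['CUDA_VISIBLE_DEVICES']}"


if __name__ == "__main__":
    with Pool(processes=N_GPUS, initializer=_pin_gpu) as pool:
        print(pool.map(run_minima_hopping, range(4)))
```

Doing something like this inside wfl's pool setup would presumably also need a way for the user to tell autoparallelize how many GPUs are available, so I'm not claiming this is the right design, just how I'd do it by hand.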