If you run multiple pipeline-run-pair utilities in parallel, there is a race condition in the local mongo files that causes some results not to be written. This is also relevant to #1, since there is renewed discussion about publishing KIM tests and VCs as a standalone package; if that package includes a local mongo, the same problem will come up again.

I think a basic solution would be to write multiprocessing-safe versions of the pipeline-run-* commands that take a lock before writing (the lock would have to propagate through a few calls to reach every place the db is updated in mongodb.py).
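A minimal sketch of what that locking could look like, assuming an inter-process file lock (since separate pipeline-run-* invocations are separate OS processes, a plain multiprocessing.Lock would not be shared between them). The names LOCK_PATH and write_result below are hypothetical stand-ins for the actual update sites in mongodb.py:

```python
# Sketch: serialize writes to the local mongo files across processes
# using an advisory file lock. LOCK_PATH and write_result() are
# hypothetical; the real change would wrap every db update in mongodb.py.
import fcntl
import json
from contextlib import contextmanager
from pathlib import Path

LOCK_PATH = Path("/tmp/local-mongo.lock")  # assumed lock-file location

@contextmanager
def db_write_lock():
    """Hold an exclusive advisory lock for the duration of a db update.

    fcntl.flock is POSIX-only; concurrent pipeline-run-* processes
    contending on the same lock file will block here and serialize.
    """
    with open(LOCK_PATH, "w") as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            yield
        finally:
            fcntl.flock(fp, fcntl.LOCK_UN)

def write_result(db_file: Path, result: dict) -> None:
    """Hypothetical stand-in for an update site in mongodb.py."""
    with db_write_lock():
        with open(db_file, "a") as fp:
            fp.write(json.dumps(result) + "\n")
```

Because the lock is taken on a shared path rather than held in memory, this would also cover the standalone-package case from #1, where independent processes (not children of one parent) write to the same local mongo.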
On Mon, May 1, 2023 at 1:43 PM, Daniel S. Karls wrote:

> Can you give a specific example of such a race condition? I would've thought that Mongo was already using exclusive locks to prevent dirty writes.