Shared Atomic Locks #88

Open

DavePearce opened this issue Feb 25, 2021 · 11 comments

Comments

@DavePearce

(see also #31, #45)

Implementing either atomic blocks or atomic global reads / writes requires some top-level lock(s). For example, writing to a shared global variable requires a lock on that variable. Likewise, writing to a shared object requires a lock on that object (e.g. monitor in Java). Therefore, we need some mechanism to identify data which is shared, versus that which is not.

For global variables, it's pretty easy if we add a shared modifier. For references, we could consider e.g. &shared int ?

@DavePearce commented Jul 1, 2021

@dumblob commented Jul 1, 2021

Very informative article, thanks for sharing!

@DavePearce

Yeah, it's pretty interesting!

@DavePearce

The second part of the series is now available as well!

https://research.swtch.com/plmm

@dumblob commented Jul 10, 2021

Yep, read it in one breath 😉.

Btw. today I looked at Armada and watched a year-old video https://www.youtube.com/watch?v=CvfFtkAQho4 and thought it might provide some insight into how others try to prove concurrency correctness without sacrificing flexibility (especially clever optimizations - e.g. first reading a shared variable without locking it, to get a "cached/stale" value as a good guess, and only then locking it and reading it again to get the truly correct, non-stale value).
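The optimistic-read optimization mentioned above could be sketched in C11 roughly as follows. This is only an illustration, not anything from Armada or Whiley; all names here are made up. The unlocked atomic read gives a cheap but possibly stale "good guess", while the locked re-read is authoritative:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical shared counter guarded by a mutex for exact reads. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic int shared_value = 0;

/* Cheap, lock-free read: may be stale relative to in-flight writers. */
int read_guess(void) {
    return atomic_load_explicit(&shared_value, memory_order_relaxed);
}

/* Lock, then re-read: no writer can interleave, so the value is current. */
int read_exact(void) {
    pthread_mutex_lock(&lock);
    int v = atomic_load(&shared_value);
    pthread_mutex_unlock(&lock);
    return v;
}

void write_value(int v) {
    pthread_mutex_lock(&lock);
    atomic_store(&shared_value, v);
    pthread_mutex_unlock(&lock);
}
```

The point of the pattern is that the guess can steer a fast path (e.g. deciding whether a retry is even worth attempting) without paying for the lock on every access.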

@DavePearce

I just finished it this morning! Already looking forward to part 3.

This stuff is definitely helping me. One question I have from all this is whether a PL should allow racy behavior at all?

@DavePearce

I'll check out that video/post

@dumblob commented Jul 11, 2021

This stuff is definitely helping me. One question I have from all this is whether a PL should allow racy behavior at all?

I think this highly depends on the definition of "racy", because e.g. the optimization I outlined above could be considered "racy", yet the functionality the programmer intended is, in the end, not racy. So what does "racy" mean? I've already seen too many assumptions about "raciness".

Assuming the PL allows one to express exactly what the programmer has in mind (not to be confused with Turing completeness, which is orthogonal to this ability to express what the programmer wants), then I'm pretty sure the PL shouldn't allow racy programs (we're talking about the default behavior of a PL, without considering features like unsafe { ... } blocks known from certain PLs). But because I know of maybe just one PL able to express really anything (XL), I think we first need to define what "racy" means in Whiley.


By "ability to express what the programmer has in mind" I mean both what she wants and what she doesn't want in the given particular case. The first condition ("what she wants") is theoretically satisfied by any Turing-complete language (though often not nicely readable 😉 - imagine assembly), but the second is usually the problem, because a PL imposes certain restrictions in certain cases (e.g. "structured programming") and allows the programmer to add only a small number of additional restrictions for a given localized case (e.g. "static assert"), instead of giving her full power to arbitrarily edit the restrictions (remove existing, edit existing, add new ones) as e.g. XL does.

Btw. this is not criticism of those languages, but rather a substantiation of why we need to define what "racy" means for a particular PL.

@DavePearce

Right, I'm hearing you. Ok, here are my thoughts:

  • (Shared Variables) We allow some variables to be declared as shared (e.g. shared int x = 0). This means they are atomic variables in the sense of the article above.
  • (Thread-Local Variables) All other variables are thread local. In particular, this means we cannot race on them.
  • (Atomic Blocks) We allow a limited form of atomic block which restricts what can actually be done inside the block (still need to figure that out). But, critically, we can express compare-and-swap.
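The compare-and-swap case in the last bullet could be sketched in C11 as follows. This is only a sketch of what an atomic block over a single shared variable might compile down to; the helper name cas is made up for illustration:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch: a Whiley-style "shared int x" lowered to a C11 atomic. */
static _Atomic int x = 0;

/* Atomically: if x == expected, set x to desired and return true;
 * otherwise leave x unchanged and return false. */
bool cas(int expected, int desired) {
    return atomic_compare_exchange_strong(&x, &expected, desired);
}
```

The restriction on atomic blocks would then amount to permitting only operation shapes (like this test-and-update) that map onto single hardware-supported atomic instructions.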

By construction, we cannot race on any variables. This may seem like a limitation, as we cannot benefit from reading/writing non-atomic variables when we know it is safe to do so. However, I think it actually does allow this. For example:

shared {int x, int y} var = ...

This is a shared variable which is actually made up of two variables. We could implement this in different ways. For example (in C-like syntax):

atomic int lock;
int x;
int y;

The intention here is that lock synchronises all accesses to x and y and, thus, guarantees race freedom (provided the generated code does it correctly).
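The lock-per-record lowering described above could be fleshed out in C11 like this. It's a minimal sketch under the assumption that the lock is a spinlock (a real implementation might use a mutex or futex instead); the type and helper names are invented for illustration:

```c
#include <stdatomic.h>

/* One possible lowering of "shared {int x, int y} var": a single lock
 * word guards both fields, so no unsynchronised access to x or y is
 * possible as long as all accesses go through these helpers. */
typedef struct {
    atomic_flag lock;  /* the "atomic int lock" from the sketch above */
    int x;
    int y;
} shared_pair;

void pair_set(shared_pair *p, int x, int y) {
    while (atomic_flag_test_and_set(&p->lock))
        ;  /* spin until the lock is acquired */
    p->x = x;
    p->y = y;
    atomic_flag_clear(&p->lock);
}

void pair_get(shared_pair *p, int *x, int *y) {
    while (atomic_flag_test_and_set(&p->lock))
        ;  /* spin until the lock is acquired */
    *x = p->x;
    *y = p->y;
    atomic_flag_clear(&p->lock);
}
```

Note that x and y themselves are plain (non-atomic) ints here: the race freedom comes entirely from the generated code routing every access through the lock, which is exactly the "provided the generated code does it correctly" proviso.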

@dumblob commented Jul 13, 2021

That actually sounds familiar - the language V provides somewhat similar primitives, though thread-local will most probably not be supported outside of unsafe{ } blocks because it would break the native goroutine abstraction (which is assumed to be implemented as fully preemptive green threads on top of OS threads).

@DavePearce

Ok, interesting. I have not heard of V before!
