PVA message ordering not guaranteed #130

Open
mdavidsaver opened this issue Oct 2, 2018 · 2 comments

Comments

@mdavidsaver
Member

The behavior of PVA with respect to the ordering of network operations is (intentionally) not well defined. e.g. if the handling of a Put operation queues a Monitor update before completing, it is not guaranteed that the client will see the Monitor update before the Put completion.

This is a direct result of the fair_queue behavior of the message send queue used by both client and server. This code was put in place to mitigate starvation issues like those seen with cagateway, where a high bandwidth subscription will cause high latency on concurrent operations.

/** @brief An intrusive, loss-less, unbounded, round-robin queue
*
* The parameterized type 'T' must be a sub-class of @class fair_queue<T>::entry
*
* @li Intrusive. Entries in the queue must derive from @class entry
*
* @li Loss-less. An entry will be returned by pop_front() corresponding to
* each call to push_back().
*
* @li Un-bounded. There is no upper limit to the number of times an entry
* may be queued other than machine constraints.
*
* @li Round robin. The order that entries are returned may not match
* the order they were added in. "Fairness" is achieved by returning
* entries in a rotating fashion based on the order in which they were
* first added. Re-adding the same entry before it is popped does not change
* this order.
* Adding [A, A, B, A, C, C] would give out [A, B, C, A, C, A].
*
* @warning Only one thread should call pop_front()
* as push_back() does not broadcast (only wakes up one waiter)
*/

This surprised and inconvenienced @thomascobb.

@mdavidsaver
Member Author

I don't know a quick way to provide an ordering guarantee. The simple solution of removing/disabling the fair_queue risks introducing contention issues in large IOCs/gateways. It might be safe to disable fair_queue if/when per-subscription flow control is enabled by default (aka. defaulting to "record[pipeline=true]"). Though this would not give @thomascobb the guarantee he wants, as flow control might prevent a monitor update from being sent before the Put completion.

In thinking about this, I don't see any transparent solution. Either low-level (e.g. an explicit sync message) or high-level (extra fields and a protocol between client/server) changes seem necessary.

@thomascobb

The main use case for this is a PV that contains a number of NTScalar structures and accepts RPC. An RPC could modify a number of these NTScalars atomically and then return when done. Ideally I would like to monitor the entire structure, see the initial update come in, do an RPC that modifies a number of sub-structures, then see a single update of these NTScalars, followed by the RPC return.
