Conversation
- can we collapse these into a single int state field so we don't need to do three volatile reads on each read() call?
- under our new threading model, do these even need to be volatile? it's expected that only one VT will interact with the reading edge of the event stream, and the VT subsystem itself must provide visibility guarantees for past actions that occurred if the VT previously ran on a different platform thread.
Yes, I will collapse these into a single field. As for the volatility: the user will get a copy of this object, and I don't think we can assume they will use a VT even if we recommend it, so I feel keeping the field volatile is safer. Let me know if you think otherwise or if I'm missing something.
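For reference, a minimal sketch of what collapsing the flags into one volatile int state field could look like (class and state names are hypothetical, not the actual fields in this PR):

```java
final class EventReaderState {
    // Hypothetical states packed into one int instead of three volatile booleans.
    private static final int OPEN = 0;
    private static final int LAST_EVENT_RECEIVED = 1;
    private static final int CLOSED = 2;

    private volatile int state = OPEN;

    // read() now needs a single volatile read to learn the full state.
    boolean isOpen() {
        return state == OPEN;
    }

    void markLastEventReceived() {
        state = LAST_EVENT_RECEIVED;
    }

    void close() {
        state = CLOSED;
    }
}
```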
getCount() incurs a volatile read of readyLatch's backing state; please only call this if debug logging is enabled.
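For example, gating the call behind the debug check (a sketch assuming an SLF4J-style logger; names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class LatchDebugLogging {
    private static final Logger log = LoggerFactory.getLogger(LatchDebugLogging.class);
    private final CountDownLatch readyLatch = new CountDownLatch(1);

    void logNotReady() {
        // Only pay for the volatile read inside getCount() when debug logging is on.
        if (log.isDebugEnabled()) {
            log.debug("writer not ready yet, latch count={}", readyLatch.getCount());
        }
    }
}
```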
Can we type this queue around ByteBuffers instead? If we use ByteBuffer as a (byte[], offset, length) triple, we can allocate larger buffers less frequently in the writer, reducing our overall allocation pressure.
Yes, we can; I will change it.
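A rough sketch of the suggested shape, with the queue typed around ByteBuffer (class and method names are hypothetical):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class EventByteQueue {
    // Each element is effectively a (byte[], offset, length) triple via the buffer's
    // position/limit, so the writer can fill and flip one large buffer instead of
    // allocating a fresh byte[] per event.
    private final BlockingQueue<ByteBuffer> chunks = new LinkedBlockingQueue<>();

    void offer(ByteBuffer chunk) throws InterruptedException {
        chunks.put(chunk);
    }

    ByteBuffer take() throws InterruptedException {
        return chunks.take();
    }
}
```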
- Duplicate of ByteBufferUtils#getBytes
- We already have a ByteBuffer, can we just push it down into pipeStream directly? Why unwrap to byte[]s here?
Yes, let me change the code to just pass ByteBuffer.
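Roughly the change being agreed on, sketched with a stand-in pipe stream interface (the real pipeStream API in this codebase may differ):

```java
import java.io.IOException;
import java.nio.ByteBuffer;

final class BufferPassThrough {
    // Stand-in for the PR's pipe stream; assumed to accept a ByteBuffer directly.
    interface PipeStream {
        void write(ByteBuffer buffer) throws IOException;
    }

    private final PipeStream pipeStream;

    BufferPassThrough(PipeStream pipeStream) {
        this.pipeStream = pipeStream;
    }

    void push(ByteBuffer buffer) throws IOException {
        // Hand the buffer straight through instead of copying it into a byte[] first.
        pipeStream.write(buffer);
    }
}
```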
nonatomic get-and-set of a volatile variable is a code smell. either use a compare-and-set primitive via AtomicIntegerFieldUpdater or AtomicBoolean, or make the field non-volatile.
Indeed, I will fix it.
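A minimal sketch of the compare-and-set variant (class and field names are hypothetical):

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class CloseOnce {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    void close() {
        // compareAndSet makes the read-check-write a single atomic step,
        // so only the first caller performs the close work.
        if (closed.compareAndSet(false, true)) {
            releaseResources();
        }
    }

    private void releaseResources() {
        // release underlying resources here
    }
}
```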
```java
* before any other event is written. Protocols that don't require the initial event still have
* to unlatch the writer by bootstrapping it with a null value.
```
I dislike this. We shouldn't be exposing complexities of a subset of protocols across all protocols.
I dislike it too 😆. Do you have a suggestion to avoid it?
TL;DR: the user has to have a reference to the writer before the writer is bootstrapped with the protocol specifics, which include whether to send the initial event in the stream.
```java
readyLatch.getCount());

// Wait for writer to be fully setup and the initial event to be written.
readyLatch.await();
```
this should only be possible if we expect a different thread to call bootstrap in parallel with a thread attempting to call write, but in our new VT threading model we should expect a single writer thread to drive all interactions. additionally, we should trivially be able to enforce that the first write call contains the initial event because our runtime already doesn't expose this object to users until after bootstrap is called.
was this added because the current runtime behavior requires it or because you anticipated it would be necessary?
An instance of this class can be created by the user and set in the request before it's fully bootstrapped, so yes, the caller can start writing from its own thread before the writer is ready.
```java
public void bootstrap(Bootstrap<IE, F> bootstrap) {
    if (readyLatch.getCount() == 0) {
        throw new IllegalStateException("bootstrap has been already called");
    }
```
If we truly expect parallelism, this is insufficient. You'd need an atomic mutation to guarantee readyLatch's state doesn't change between your state check, your future mutations of the latch, and other threads which may be executing the same check-and-mutate code.
This is there just to catch bugs where the calling code calls it twice. We don't expect any concurrent callers for this code.
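If we ever do want the guard to hold up under a race, the check can be made atomic cheaply; a sketch assuming an AtomicBoolean alongside the latch (names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

final class BootstrapOnce {
    private final AtomicBoolean bootstrapped = new AtomicBoolean(false);
    private final CountDownLatch readyLatch = new CountDownLatch(1);

    void bootstrap() {
        // getAndSet makes the double-call check atomic, so even a racy second
        // caller reliably gets the exception instead of slipping past the check.
        if (bootstrapped.getAndSet(true)) {
            throw new IllegalStateException("bootstrap has already been called");
        }
        // ... protocol-specific setup would go here ...
        readyLatch.countDown();
    }
}
```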
for a limit this small, we might as well use an ArrayBlockingQueue: better spatial locality, and it avoids per-node allocations.
Yes, I will change it.
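A sketch of the swap (capacity and names are illustrative):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

final class WriterQueues {
    // Before (hypothetical): new LinkedBlockingQueue<>() — a node allocation per element.
    // After: array-backed, contiguous storage, fixed small capacity.
    static BlockingQueue<ByteBuffer> newChunkQueue(int capacity) {
        return new ArrayBlockingQueue<>(capacity);
    }
}
```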
Issue #, if available:
Description of changes:
Implements a new event streams API that is VT-friendly: it blocks when the caller reads or writes events.
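As a rough illustration of the blocking shape described above (hypothetical names, not the actual interface added in this PR):

```java
import java.io.IOException;

// Hypothetical sketch only: both operations block, so a virtual thread can simply
// park on them instead of juggling callbacks or futures.
public interface EventStream<IN, OUT> {
    // Blocks until the next inbound event is available; returns null at end of stream.
    IN read() throws IOException, InterruptedException;

    // Blocks until the outbound event has been handed off to the transport.
    void write(OUT event) throws IOException, InterruptedException;
}
```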
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.