In my experience left-right is the clear place to start. It's general purpose and fast for reads. Only if that is unsuitable (e.g. memory usage) does one design a custom data structure. There's a Rust lib implementation with links to more resources: https://docs.rs/left-right/latest/left_right/
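For a flavour of the API, here's a minimal sketch along the lines of the Counter example from the crate's docs (exact trait details may differ between versions):

```rust
use left_right::Absorb;

// Toy data structure guarded by left-right: a single counter.
#[derive(Default)]
struct Counter(i32);

// Operations recorded in the write log.
enum CounterOp {
    Add(i32),
}

impl Absorb<CounterOp> for Counter {
    // Apply a logged operation to one copy of the data.
    fn absorb_first(&mut self, op: &mut CounterOp, _other: &Self) {
        match op {
            CounterOp::Add(i) => self.0 += *i,
        }
    }

    // Bring one copy in line with the other.
    fn sync_with(&mut self, first: &Self) {
        self.0 = first.0;
    }
}

fn main() {
    let (mut w, r) = left_right::new::<Counter, CounterOp>();

    w.append(CounterOp::Add(1));
    w.append(CounterOp::Add(2));
    w.publish(); // swap copies so readers can see the writes

    // Readers get a consistent snapshot through a guard.
    assert_eq!(r.enter().map(|guard| guard.0), Some(3));
}
```

The nice property is that reads never take a lock; the writer pays for all the synchronization at publish time.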
I only learned about the left-right scheme after finishing my design. I thought I had come up with something novel - it turns out it was just my ignorance. :)
Still, I think the left-right crate could benefit from one optimization I came up with. This is what the lib says in the docs:
> When WriteHandle::publish is called, the writer atomically swaps the reader pointer to point to the other T. It then waits for the epochs of all current readers to change, and then replays the operational log to bring the stale copy up to date.
I think the lib could postpone the log replaying until the next publish. Chances are all readers will have moved to the new epoch by then, so the writer thread won't have to wait at all.
> I think the lib could postpone the log replaying until the next publish. Chances are all readers will have moved to the new epoch by then, so the writer thread won't have to wait at all.
I think you can't even do a new publish if all readers haven't finished switching to the new epoch after the previous publish: replaying the log would mutate the copy that those readers are still reading, so you risk corrupting readers that are still on the original epoch.
I think the difference is that with double buffering the stale data is thrown away, whereas here they maintain a list of operations to replay on the stale copy to bring it back into sync.
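A toy illustration of that difference (nothing like the crate's actual internals, just the idea of replaying an op log onto the stale copy instead of throwing it away):

```rust
// One "live" copy that readers would see, one stale copy, and a log of
// operations that is replayed onto the stale copy to bring it back into sync.
enum Op {
    Insert(String),
    Remove(String),
}

fn apply(data: &mut Vec<String>, op: &Op) {
    match op {
        Op::Insert(s) => data.push(s.clone()),
        Op::Remove(s) => data.retain(|x| x != s),
    }
}

fn main() {
    let mut live = vec!["a".to_string()];
    let mut stale = live.clone();
    let mut oplog: Vec<Op> = Vec::new();

    // The writer applies each operation to the live copy and records it.
    let ops = vec![Op::Insert("b".into()), Op::Remove("a".into())];
    for op in ops {
        apply(&mut live, &op);
        oplog.push(op);
    }

    // After the swap, once no reader is left in the stale copy, the same log
    // is replayed there instead of rebuilding the whole structure.
    for op in oplog.drain(..) {
        apply(&mut stale, &op);
    }
    assert_eq!(live, stale);
}
```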
Oh neat, that’s what that’s called. I did something similar in ignorance. I needed a histogram metric with multiple lock-free writers and a single reader.
- create two ring buffers (left/right; I called them hot/cold)
- store the hot ring buffer in an atomic pointer
- single reader swaps hot and cold and waits for writers to finish (rough sketch below)
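In sketch form the idea looks something like this; the bucket layout, counter names, and memory orderings below are illustrative guesses rather than the actual production code:

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

const BUCKETS: usize = 16;

struct Buffer {
    counts: [AtomicU64; BUCKETS],
    writers: AtomicU64, // writers currently recording into this buffer
}

impl Buffer {
    fn new() -> Self {
        Buffer {
            counts: std::array::from_fn(|_| AtomicU64::new(0)),
            writers: AtomicU64::new(0),
        }
    }
}

pub struct Histogram {
    buffers: [Buffer; 2],
    hot: AtomicUsize, // index (0 or 1) of the buffer writers should use
}

impl Histogram {
    pub fn new() -> Self {
        Histogram {
            buffers: [Buffer::new(), Buffer::new()],
            hot: AtomicUsize::new(0),
        }
    }

    // Writer path: pin the hot buffer, bump a bucket, unpin.
    // SeqCst keeps the pin/re-check vs. swap/wait handshake simple.
    pub fn record(&self, bucket: usize) {
        loop {
            let i = self.hot.load(Ordering::SeqCst);
            self.buffers[i].writers.fetch_add(1, Ordering::SeqCst);
            // The reader may have swapped between the load and the increment;
            // if so, back out and retry against the new hot buffer.
            if self.hot.load(Ordering::SeqCst) == i {
                self.buffers[i].counts[bucket].fetch_add(1, Ordering::Relaxed);
                self.buffers[i].writers.fetch_sub(1, Ordering::SeqCst);
                return;
            }
            self.buffers[i].writers.fetch_sub(1, Ordering::SeqCst);
        }
    }

    // Single reader: swap hot and cold, wait for straggling writers on the
    // now-cold buffer, then drain it.
    pub fn drain(&self) -> [u64; BUCKETS] {
        let old = self.hot.load(Ordering::SeqCst);
        self.hot.store(1 - old, Ordering::SeqCst);
        while self.buffers[old].writers.load(Ordering::SeqCst) != 0 {
            std::hint::spin_loop();
        }
        std::array::from_fn(|b| self.buffers[old].counts[b].swap(0, Ordering::Relaxed))
    }
}

fn main() {
    let h = Histogram::new();
    h.record(3);
    h.record(3);
    assert_eq!(h.drain()[3], 2);
}
```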
If you're ok with single writer as a constraint, hazard pointers get you similar read availability at the cost of maybe more expensive writes (and similar 2x memory overhead so long as the read sections are relatively short).
Hello, the author here. It feels great to see my blog on HN!
It was quite a journey: at first I thought I had invented a novel concurrency scheme. However, it turns out that it was simply a mix of my ignorance and hubris! :-)
Still, I had a lot of fun while designing this data structure and I believe it made a nice story. Ask me anything!
If the keys are strings and you can use Rust, have you considered fst [0]? If the keys share common prefixes, it is very compact. There's a blog post about it [1].
The main limitation is that it doesn't support removal, but if removals are infrequent you can work around that with a second fst of removed items (and periodically rebuild the whole thing).
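A minimal sketch of building and querying an fst Map (the keys and values here are made up; note that keys have to be inserted in sorted order):

```rust
use fst::{Map, MapBuilder};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Keys must be inserted in byte-lexicographic order.
    let mut builder = MapBuilder::memory();
    builder.insert("alice", 1)?;
    builder.insert("alicia", 2)?; // shares the "alic" prefix with the key above
    builder.insert("bob", 3)?;

    // The builder emits the raw FST bytes; shared prefixes are stored only
    // once, which is what makes the structure so compact.
    let bytes = builder.into_inner()?;
    let map = Map::new(bytes)?;

    assert_eq!(map.get("alicia"), Some(2));
    assert_eq!(map.get("carol"), None);
    Ok(())
}
```

Removals would then go into a second, smaller fst of tombstones, as described above.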
Yes, these state-machine data structures are also used in regex engines. The author of the fst crate also created Rust's regex crate and the ripgrep [0] CLI tool.
The regex crate has multiple implementations of regex matching algorithms, which are also exposed as a library [1]. The implementation is selected at runtime based on which one is fastest and correct for a given regex. See also [2].
No, it isn't. The only similarity between them is that both are built on finite state machines. Cox's article is about regexes; the fst crate is a data structure for storing keys and optional values. Two completely different things.
That's actually exactly what I did! I removed this part from the article as I felt it was not relevant for the concurrent protocol design and it was adding mental load for readers not that fluent in Java.
Why not just allocate the blobs off-heap? (That is something you probably want to do anyway if it's cryptographic material, to avoid being at the mercy of the GC leaving copies around.)
ByteBuffer.allocateDirect should do that IIRC. This allows you to use the standard ConcurrentHashMap while still being able to get a stable pointer for use by the Rust logic.
I could use DirectByteBuffer instances as CHM values. But Java deallocates the backing memory of DirectByteBuffers during object finalization. If there is no on-heap memory pressure then there is no GC and thus no finalization, so it would leak off-heap memory. I could also use Unsafe to hack into DirectByteBuffer and call the Cleaner explicitly; many libraries do that anyway. But then I would still need some kind of reference counting to make sure I won't deallocate a buffer with active readers.
Or you could simply invoke a GC periodically (or every N times a key is removed from the map, or similar schemes).
Another simple way, if we don't like the idea of triggering GCs manually, is to allocate each buffer twice, once off-heap and once on-heap: use the off-heap one for the actual key storage, and the on-heap one just to generate heap memory pressure.
Is this the equivalent of directly asking the OS for more pages, or does it work via some other heap-like mechanism that simply isn't garbage collected?
That's indeed a very good question! There are 2 sets of reasons:
1. Technical
The contract is given - I'm receiving usernames as CharSequence (String). I could change that, but that would likely require changes in the SQL parser - not simple at all. Alternatively, I could pass the whole CharSequence to Rust - but passing objects over the JNI boundary comes with a performance penalty (compared to passing primitives) and we avoid that whenever possible. Or I could encode the CharSequence content into a temporary off-heap buffer and pass just the pointer to Rust. But this brings back some of the questions from the article - like who owns the buffer?
2. Other reasons
I realized this was a possibility only when I was nearly done with the design (this whole endeavor took less than one day) and I felt the urge to finish it. Also: This article wouldn't have been created!
Cool article, I learned a lot, but I'm scratching my head wondering why they're settling for Java on this project when Rust is better suited for it. With the lifetimes, the need for consistency and concurrency, the memory-leak issues, and the general goal of high performance, this seems like an awesome project for learning Rust.