From 12M ops/s to 305M ops/s on a lock-free ring buffer.
In this post, I walk you step by step through implementing a single-producer single-consumer queue from scratch.
This pattern is widely used to share data between threads in the lowest-latency environments.
Your blog footer mentions that code samples are GPL unless otherwise noted. You don't seem to note otherwise in the article, so -- do you consider these snippets GPL licensed?
Actually, I'm not sure. The GPL was for the source code of the website itself.
I guess the code samples inside the post are under https://david.alvarezrosa.com/LICENSE
But feel free to ping me if you need a different license, I'm quite open about it.
It would be nice to have an example use case where the technique would show a benefit.
It seems relatively rare to have a single producer and a single consumer thread, and for it to be worth polling a ring buffer.
Something to add to this: if you're focusing on these low-level optimizations, make sure the device this code runs on is actually tuned.
A lot of people focus on the code and then assume the device in question is only there to run it. There's so much you can tweak. I don't always measure it, but last time I saw at least a 20% improvement in network throughput just by tweaking a few things on the machine.
Agreed. For benchmarking I used this <https://github.com/david-alvarez-rosa/CppPlayground/blob/mai...> which relies on GoogleBenchmark and pins producer/consumer threads to dedicated CPU cores
What else could be improved? Would like to learn :)
Maybe using huge pages?
Kernel tick rate is a pretty big one; most people don't bother and use what their OS ships with.
Disabling C-states, pinning network interfaces to dedicated cores (and isolating your application from those cores) and `SCHED_FIFO` (chrt -f 99 <prog>) help a lot.
Transparent hugepages increase latency without you being aware of when it happens; I usually disable them.
Idk, there's a bunch, but they all depend on your use case. For example, I always disable hyperthreading because I care more about latency than processing power, and I don't want to steal cache from my workload randomly. But some people have more I/O-bound workloads, and hyperthreading is just a strict improvement in those situations.
Thanks. Do you happen to know why hyperthreading should be disabled?
In prod most trading companies do disable it; not sure about best practices for generic benchmarks.
Disabling it eliminates cache contention between sibling threads, which otherwise leads to (randomly) increased latency.
There are some microarchitectural resources that are either statically divided between running threads, or "cooperatively" fought over, and if you don't need to hide cache miss latency, which is the only thing hyperthreading is really good at, you're probably better off disabling the supernumerary threads.
Random idea: If you have a known sentinel value for empty, could you avoid the reader needing to read the writer's index? Just try to read, if it is empty the queue is empty, otherwise take the item and put an empty value there. Similarly for writing you can check the value, if it isn't empty the queue is full.
It seems that in this case as you get contention the faster end will slow down (as it is consuming what the other end just read) and this will naturally create a small buffer and run at good speeds.
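Roughly what I have in mind (a sketch for non-zero ints, with 0 as the empty sentinel; names are made up):

    #include <atomic>
    #include <cstddef>

    // Sketch of the sentinel idea: the slots themselves are the only shared
    // state. Each index is private to its own thread; 0 means "slot is empty".
    template <std::size_t N>
    class SentinelRing {
        std::atomic<int> slots_[N] = {};  // zero-initialized: all empty
        std::size_t head_ = 0;            // producer-local, never shared
        std::size_t tail_ = 0;            // consumer-local, never shared

    public:
        bool push(int value) {  // value must be non-zero
            // Slot still holds an unconsumed value: the ring is full.
            if (slots_[head_].load(std::memory_order_acquire) != 0)
                return false;
            slots_[head_].store(value, std::memory_order_release);
            head_ = (head_ + 1) % N;
            return true;
        }

        bool pop(int& out) {
            int v = slots_[tail_].load(std::memory_order_acquire);
            if (v == 0)
                return false;  // sentinel: queue is empty
            out = v;
            slots_[tail_].store(0, std::memory_order_release);  // mark empty
            tail_ = (tail_ + 1) % N;
            return true;
        }
    };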
The hard part is probably that sentinel and ensuring that it can be set/cleared atomically. In Rust you can use `Option<T>` to get a sentinel for any type (and it very often doesn't take any space), but I don't think there is an API to atomically set/clear that flag. (Technically I think this is always possible, because the sentinel that Option picks will always be small even if the T is very large, but I don't think there is an API for this.)
Yeah, or you could put a generation number in each slot adjacent to T and a read will only be valid if the slot's generation number == the last one observed + 1, for example. But ultimately the reader and writer still need to coordinate here, so we're just shifting the coordination cache line from the writer's index to the slot.
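Something like this (invented names, just to illustrate):

    #include <atomic>
    #include <cstdint>

    // Illustration of the generation-number idea (invented names). The writer
    // publishes a slot by bumping its generation; the reader trusts the value
    // only once the generation has advanced past the last one it observed.
    struct Slot {
        std::atomic<std::uint64_t> gen{0};
        int value{};
    };

    void write_slot(Slot& s, int v, std::uint64_t seq) {
        s.value = v;                                  // plain write first
        s.gen.store(seq, std::memory_order_release);  // then publish
    }

    bool read_slot(Slot& s, std::uint64_t last_seen, int& out) {
        if (s.gen.load(std::memory_order_acquire) != last_seen + 1)
            return false;                 // not published yet
        out = s.value;                    // safe: acquire saw the release
        return true;
    }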
I think the key difference is that they only need to coordinate when the reader and writer are close together. If that slows one end down they naturally spread apart. So you don't lose throughput, only a little latency in the contested case.
> I think the key difference is that they only need to coordinate when the reader and writer are close together.
This was already the case with the cached index design at the end of the article, though. (Which doesn't require extra space or extra atomic stores.)
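For anyone skimming, the producer side of that design looks roughly like this (a sketch from memory, not the article's exact code; tail_cache_ is a plain, producer-local copy of the consumer's index):

    bool push(const T& value) {
        auto head = head_.load(std::memory_order_relaxed);
        auto next_head = (head + 1) % capacity_;
        if (next_head == tail_cache_) {  // looks full: refresh the cached index
            tail_cache_ = tail_.load(std::memory_order_acquire);
            if (next_head == tail_cache_)
                return false;            // actually full
        }
        buffer_[head] = value;
        head_.store(next_head, std::memory_order_release);
        return true;
    }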
That's a good point. They are very similar. I guess the sentinel design in theory doesn't need to synchronize at all as long as there is a decent buffer between them. But the cached design synchronizes less commonly the more space there is which sounds like it would be very similar in practice. The sentinel design might also have a few thrashing issues when the reader and writer are on the same page which would probably be a bit less of an issue with the cached index design.
Great article, thanks for sharing. And such a lovely website too :)
Thanks for the feedback <3
Great post!
Would you mind expanding on the correctness guarantees enforced by the atomic semantics used? Are they ensuring two threads can't push to the same slot nor pop the same value from the ring? This type of atomic coordination usually comes from CAS or atomic-increment calls, which I'm not seeing, so I'm interested in hearing your take on it.
I see you replied on a comment below with:
> note that there is only one consumer and one producer
That clarifies things, as you don't need multi-thread coordination on reads or writes when assuming a single producer and a single consumer.
Exactly, that's right
Thanks! That's not ensured; the optimizations are only valid due to the constraints:
- One single producer thread
- One single consumer thread
- Fixed buffer capacity
So, to answer:
> Are they ensuring two threads can't push to the same slot nor pop the same value from the ring?
No need for this use case :)
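Concretely, the reason no CAS is needed is that each index has exactly one writer (tiny sketch):

    // Each index is stored by exactly one thread, so plain atomic
    // loads/stores are enough -- there is nothing to win a CAS against.
    std::atomic<std::size_t> head_{0};  // written by producer, read by consumer
    std::atomic<std::size_t> tail_{0};  // written by consumer, read by producer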
This is an SPSC queue -- there aren't multiple writers to coordinate, nor readers. It simplifies the design.
I had what I thought was a pretty good implementation, but I wasn't aware of the cache line bouncing. Looks like I've got some updates to make.
Glad that it helps :)
Is there a C library where I can get these data structures for free?
Random q: What was the first CPU to support atomic instructions?
I don't know but the IBM 360 and the DEC PDP-10 both had them. Those are the earliest systems I ever saw.
Super fun, def gonna try this on my own time later
Feel free to share your findings
It's lock-free because it uses ordered loads and stores, which is also how you implement locks. I find the semantic distinction unconvincing. The post is really about how slow the default STL mutex implementation is.
This is in C++; other languages have different atomic primitives.
Don't most people use C++11 atomics now? You have SeqCst, Release, Acquire, and Relaxed (with Consume deprecated due to the difficulty of implementing it). You can do loads, stores, and exchanges with each ordering type. Zig, Rust, and C all use the same orderings. I guess Java has its own memory model since it's been around a lot longer, but most people have standardized around C++'s design.
Which is a slight shame since Load-Linked/Store-Conditional is pretty cool, but I guess that's limited to ARM anyways, and now they've added extensions for CAS due to speed.
I've taken an interest in lock-free queues for ultra-low-power embedded... think Cortex-M0, or even AVR/PIC.
Things get interesting when you're working with a CPU that lacks the ldrex/strex assembly instructions that make this all work. I think your only options at that point are to disable/enable interrupts. If anyone has any insights into this constraint I'd love to hear it.
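The disable/enable-interrupts option I mean is basically this (a sketch assuming a CMSIS toolchain for Cortex-M; the intrinsics are standard CMSIS names, nothing from the post):

    #include <cstdint>
    // Assumes a CMSIS device header providing __get_PRIMASK, __disable_irq
    // and __set_PRIMASK (standard CMSIS intrinsics on Cortex-M).

    // Disable/enable-interrupts fallback on cores without ldrex/strex.
    // Saving and restoring PRIMASK keeps nested critical sections safe.
    static inline std::uint32_t critical_enter() {
        std::uint32_t primask = __get_PRIMASK();  // remember current IRQ state
        __disable_irq();                          // mask all maskable interrupts
        return primask;
    }

    static inline void critical_exit(std::uint32_t primask) {
        __set_PRIMASK(primask);  // restore; re-enables only if it was enabled
    }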
For ultra low-power embedded, wouldn't a mutex approach work just fine? You're running on a single core anyway.
I'm not sure about the single-core scenario, but would love to learn if someone else wants to add something
In reality, multiple threads on a single core doesn't make much sense, right?
LL/SC is still hinted at in the C++11 model with std::atomic<T>::compare_exchange_weak:
https://en.cppreference.com/w/cpp/atomic/atomic/compare_exch...
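The classic retry loop shows the connection; a spurious compare_exchange_weak failure is exactly where a failed store-conditional would surface (small sketch):

    #include <atomic>

    std::atomic<int> counter{0};

    void increment() {
        int old = counter.load(std::memory_order_relaxed);
        // On LL/SC hardware each iteration maps naturally onto a
        // load-linked/store-conditional pair; "weak" permits the spurious
        // failures a store-conditional can produce.
        while (!counter.compare_exchange_weak(old, old + 1,
                                              std::memory_order_acq_rel,
                                              std::memory_order_relaxed)) {
            // "old" was refreshed with the current value; just retry
        }
    }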
Really? Pretty much all atomics I've used have load and store of various integer sizes. I wrote a ring buffer in Go that's very similar to the final design here, using similar atomics.
Nice one, thanks for sharing. Do you wanna share the ring buffer code itself?
They generally map directly to concepts in the CPU architecture. On many architectures, load/store instructions are already guaranteed to be atomic as long as the address is properly aligned, so atomic load/store is just a load/store. Non-relaxed ordering may emit a variant load/store instruction or a separate barrier instruction. Compare-exchange will usually emit a compare and swap, or load-linked/store-conditional sequence. Things like atomic add/subtract often map to single instructions, or might be implemented as a compare-exchange in a loop.
The exact syntax and naming will of course differ, but any language that exposes low-level atomics at all is going to provide a pretty similar set of operations.
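A concrete example of that mapping on x86-64 (typical codegen; worth checking your own compiler on godbolt):

    #include <atomic>
    #include <cstdint>

    std::atomic<std::uint64_t> x{0};

    // Typical x86-64 codegen (illustrative, compiler-dependent):
    std::uint64_t load_acquire() {
        return x.load(std::memory_order_acquire);  // plain mov: x86 loads are
    }                                              // already acquire under TSO

    void store_release(std::uint64_t v) {
        x.store(v, std::memory_order_release);     // plain mov: x86 stores are
    }                                              // already release under TSO

    void store_seq_cst(std::uint64_t v) {
        x.store(v, std::memory_order_seq_cst);     // xchg (or mov + mfence) for
    }                                              // the full barrier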
Yeah, that's why I was surprised by the grandparent saying the atomics were C++ specific.
100% agree +1
The JVM has almost the same (the C++ memory model was modeled after the JVM one, with some subtle fixes).
Yeah, this is quite specific to C++ (at a syntax level)
Huh? Other languages that compile to machine code and offer control over struct layout and access to the machine's atomics will work the same way.
Sure, C++ has a particular way of describing atomics in a cross-platform way, but the actual hardware operations are not specific to the language.
Yeah, different languages will have different syntaxes and ways of using atomics.
But at the hardware level they're all kind of the same.
It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Also doesn't have fences on the store, has extra branches that shouldn't be there, and is written in really stylistically weird C++.
Maybe an LLM that likes a different language more, copying a broken implementation off GitHub? Mostly commenting because the initial replies are "best" and "lol", though I sympathise with one of those.
> It's obviously, trivially broken. Stores the index before storing the value, so the other thread reads nonsense whenever the race goes against it.
Are we reading the same code? The stores are clearly after value accesses.
> Also doesn't have fences on the store
?? It uses acquire/release semantics seemingly correctly. Explicit fences are not required.
Push:
buffer_[head] = value;
head_.store(next_head, std::memory_order_release);
return true;
There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
That's backwards: in C++, a release store to head_ and an acquire load of that same atomic do order the prior buffer_ write, even though the data and index live in different locations. The consumer that sees the new head can't legally see an older value for that slot unless something else is racing on it separately. If this is broken, the bug is elsewhere.
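Spelled out with the matching consumer side (same variable names, just a sketch):

    bool pop(T& value) {
        auto tail = tail_.load(std::memory_order_relaxed);
        // This acquire load synchronizes-with the producer's release store of
        // head_, so the non-atomic read below happens-after the buffer_ write.
        if (tail == head_.load(std::memory_order_acquire))
            return false;  // empty
        value = buffer_[tail];
        tail_.store((tail + 1) % capacity_, std::memory_order_release);
        return true;
    }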
> There's no relationship between the two written variables. Stores to the two are independent and can be reordered. The aq/rel applies to the index, not to the unrelated non-atomic buffer located near the index.
No, this is incorrect. If you think there's no relationship, you don't understand "release" semantics.
https://en.cppreference.com/w/cpp/atomic/memory_order.html
> A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store.
This is just wrong. See https://en.cppreference.com/w/cpp/atomic/memory_order.html. Emphasis mine:
> A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. *All writes in the current thread are visible in other threads that acquire the same atomic variable* (see Release-Acquire ordering below) and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic (see Release-Consume ordering below).
write with release semantic cannot be reordered with any other writes, dependent or not.
Relaxed atomic writes can be reordered in any way.
> write with release semantic cannot be reordered with any other writes, dependent or not.
To quibble a little bit: later program-order writes CAN be reordered before release writes. But earlier program-order writes may not be reordered after release writes.
> Relaxed atomic writes can be reordered in any way.
To quibble a little bit: they can't be reordered with other operations on the same variable.
Yep, you are right and more precise, and precision is very important in this topic.
I stand corrected.
Sorry, but that's not actually true. There are no data races; the atomics prevent that (note that there is only one consumer and one producer).
Regarding the style, it follows the "almost always auto" idea from Herb Sutter
If you enforce that the buffer size is a power of 2, you can just use a mask to do the

    if (next_head == buffer.size())
        next_head = 0;

part.
If it's a power of two, you don't need the branch at all. Let the unsigned index wrap.
You ultimately need a mask to access the correct slot in the ring. But it's true that you can leave unmasked values in your reader/writer indices.
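A sketch of how the two fit together (monotonic unsigned indices, masked only when indexing; names are illustrative):

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Power-of-two trick: indices only ever increase (wrapping as unsigned
    // integers); the mask picks the slot. Works because Capacity divides 2^64.
    template <typename T, std::size_t Capacity>
    class MaskedRing {
        static_assert((Capacity & (Capacity - 1)) == 0, "power of two required");
        T buffer_[Capacity];
        std::atomic<std::uint64_t> head_{0};  // total pushes, never masked
        std::atomic<std::uint64_t> tail_{0};  // total pops, never masked

    public:
        bool push(const T& value) {
            auto head = head_.load(std::memory_order_relaxed);
            if (head - tail_.load(std::memory_order_acquire) == Capacity)
                return false;                        // full, no wrap branch
            buffer_[head & (Capacity - 1)] = value;  // mask only when indexing
            head_.store(head + 1, std::memory_order_release);
            return true;
        }

        bool pop(T& out) {
            auto tail = tail_.load(std::memory_order_relaxed);
            if (head_.load(std::memory_order_acquire) == tail)
                return false;                        // empty
            out = buffer_[tail & (Capacity - 1)];
            tail_.store(tail + 1, std::memory_order_release);
            return true;
        }
    };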
Interesting, I've never heard about anybody using this. Maybe a bit unreadable? But yeah, should work :)
See https://fgiesen.wordpress.com/2012/07/21/the-magic-ring-buff... which takes it even further :)
Nice one!
Indeed that's true. That extra constraint enables further optimization
It's mentioned in the post, but worth reiterating!
This was, in fact, mentioned in the article.