
It's an interesting point - "lock free" then is actually more about consistently good performance than theoretically maximising performance. If you never have to wait for a lock then you know exactly how long your process will take (as long as it isn't CPU starved). As soon as there's a lock, it becomes unpredictable: when there's a lot of contention you'll wait a long time, and when there isn't you won't have to wait at all. And best of all, with lock-free code you should never get deadlocks.
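
For illustration (my own sketch, not from any article): a lock-free increment built on a CAS retry loop. No thread ever holds a lock, so deadlock is impossible; a failed CAS just means another thread made progress first.

    #include <atomic>

    std::atomic<long> counter{0};

    void increment() {
        long old = counter.load(std::memory_order_relaxed);
        // May retry under contention, but a failed CAS means some
        // other thread's increment already landed, so the system as
        // a whole always makes progress (that's lock-freedom).
        while (!counter.compare_exchange_weak(old, old + 1,
                                              std::memory_order_relaxed)) {
            // 'old' has been reloaded with the current value; retry.
        }
    }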

This property in turn could get you better performance in the end, because you may not have to size everything for such a big range of contention; you can push hardware utilization higher without risking a contention spike and severe degradation.

> If you never have to wait for a lock then you know exactly how long your process will take

I think you may be confusing lock-free with wait-free here. I believe it's wait-free that gives you a bound on the number of steps the process can take to complete an operation, but I'm not an expert.
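
Roughly, the distinction looks like this (a sketch assuming std::atomic<int> is lock-free on the platform; fetch_add is only truly wait-free where the hardware has an atomic add instruction, e.g. x86 LOCK XADD):

    #include <atomic>

    std::atomic<int> n{0};

    // Wait-free: completes in a bounded number of steps for the
    // calling thread no matter what other threads do.
    int wait_free_inc() { return n.fetch_add(1); }

    // Lock-free but not wait-free: some thread always makes
    // progress, but one unlucky thread could in principle keep
    // losing the CAS race and retry unboundedly.
    int lock_free_inc() {
        int old = n.load();
        while (!n.compare_exchange_weak(old, old + 1)) {
            // 'old' was refreshed by the failed CAS; try again.
        }
        return old;
    }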


I don't know how authoritative it is, but I was taking the definitions from here:

http://www.1024cores.net/home/lock-free-algorithms/introduct...

They define wait-free as not even having thread starvation, and lock-free as not having to wait for anything as long as you have CPU time to actually run.

It's always interesting how any given topic fragments into much more complexity than you expected once you start learning about it!


Doesn't that support it?

> Wait-freedom means that ... Each operation is executed in a bounded number of steps.


Yes, wait-free is a more precise description of single-writer ring buffer performance.

Lock-free queues are not fixed-time, because operations on them involve negotiations on the bus to reclaim the cache line involved, often behind other queued operations.

With the ring buffer, the writer never gives up ownership of the head cache line, and the next one can be pre-fetched (speculatively) because the writes are walking memory sequentially. There is a little hiccup when you wrap around, so it's not absolutely fixed-time.
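
A rough sketch of the shape of such a single-writer ring (illustrative names and memory orders, not production code): the producer alone writes head and the consumer alone writes tail, each padded onto its own cache line, so neither index is ever contended for writing:

    #include <atomic>
    #include <cstddef>

    template <typename T, std::size_t N>   // N must be a power of two
    class SpscRing {
        T buf[N];
        alignas(64) std::atomic<std::size_t> head{0};  // producer-owned
        alignas(64) std::atomic<std::size_t> tail{0};  // consumer-owned
    public:
        bool push(const T& v) {             // called by the one writer
            std::size_t h = head.load(std::memory_order_relaxed);
            if (h - tail.load(std::memory_order_acquire) == N)
                return false;               // full
            buf[h & (N - 1)] = v;           // sequential walk -> prefetchable
            head.store(h + 1, std::memory_order_release);
            return true;
        }
        bool pop(T& out) {                  // called by the one reader
            std::size_t t = tail.load(std::memory_order_relaxed);
            if (head.load(std::memory_order_acquire) == t)
                return false;               // empty
            out = buf[t & (N - 1)];
            tail.store(t + 1, std::memory_order_release);
            return true;
        }
    };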

I put my ring buffers on hugepages so I am not competing for TLB (page-mapping cache) entries.
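
On Linux that can be as simple as mmap with MAP_HUGETLB (a sketch; it assumes hugepages have already been reserved, e.g. via /proc/sys/vm/nr_hugepages, and the size here is illustrative):

    #include <sys/mman.h>
    #include <cstdio>
    #include <cstddef>

    int main() {
        const std::size_t len = 2 * 1024 * 1024;   // one 2 MiB hugepage
        void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
        // Ring buffer storage would live in 'p', so the whole buffer
        // sits behind a single TLB entry instead of many 4 KiB ones.
        munmap(p, len);
        return 0;
    }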
