this post was submitted on 13 Mar 2024
39 points (97.6% liked)

Rust


I saw this on Mastodon and found it interesting. Rust already prevents a lot of race conditions, but deadlocks when using mutexes are still possible (and I have actually had one myself, though I caught it during testing). So it would be nice if these cases could be caught at compile time. For now, this seems to be just a proof of concept, but it is always nice to see the direction people are going and what areas are being explored.

top 5 comments
[–] sugar_in_your_tea@sh.itjust.works 2 points 8 months ago* (last edited 8 months ago)

Every situation is different, but I've solved similar problems in the past with a work queue. Things like database writes are queued to a central worker task that knows more about lock order and can do bulk operations to reduce how long locks are held, resulting in greater throughput at the risk of data loss if there's a crash. We did this mostly for write performance on an embedded product (we used SD cards with bursty writes).
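Something like this minimal sketch of the idea. The original product isn't shown here, so `Cmd`, the batching policy, and the `println!` standing in for the actual database write are all made up:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical command type for the work queue; a real system would
// have whatever operations its storage layer actually needs.
enum Cmd {
    Write { key: String, value: String },
    Flush,
}

// Spawns the single worker that owns the storage, so it alone decides
// lock order and can batch writes into one bulk operation.
fn spawn_worker() -> mpsc::Sender<Cmd> {
    let (tx, rx) = mpsc::channel::<Cmd>();
    thread::spawn(move || {
        let mut batch = Vec::new();
        for cmd in rx {
            match cmd {
                Cmd::Write { key, value } => batch.push((key, value)),
                Cmd::Flush => {
                    // One bulk write keeps any lock held briefly; a crash
                    // before this point loses the queued batch.
                    for (k, v) in batch.drain(..) {
                        println!("write {k}={v}");
                    }
                }
            }
        }
    });
    tx
}
```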

So applying this to the more general problem of mutexes, I wonder if having a single thread manage locks would work. I'm thinking that single thread would store all locks as a bit field, and threads would interact with it like this:

  1. Thread calls Lock(locks_needed)
  2. Lock creates a future that resolves once all of the requested resources are available, with two timeouts: wait and error
  3. The future checks the bit field to see if all locks are available; if the wait timeout is reached, the future is moved to a priority list, which blocks all other futures from locking those resources until this future resolves
  4. If the error timeout is reached, the future resolves with that error

This has quite a bit more overhead (perhaps it could be largely eliminated on the happy path with atomic operations), but it should resolve the livelock issue since it degrades to a priority queue. It could work even better in an async context, since you wouldn't need a separate async loop (just expose an AsyncLock entry point as well).
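Here's a rough, blocking sketch of the core idea. It only covers the bit field and the error timeout; the wait timeout, priority list, and async entry point from the steps above are left out, and all the names are invented:

```rust
use std::sync::{Condvar, Mutex};
use std::time::Duration;

// Resources are bits in a u64; a thread blocks until every bit it
// asked for is free, then claims them all at once.
pub struct LockManager {
    held: Mutex<u64>,
    freed: Condvar,
}

impl LockManager {
    pub fn new() -> Self {
        LockManager { held: Mutex::new(0), freed: Condvar::new() }
    }

    /// Blocks until all bits in `needed` are free, then claims them
    /// atomically. All-or-nothing acquisition means a thread never
    /// holds some of its resources while waiting on the rest, which is
    /// what rules out deadlock.
    pub fn lock(&self, needed: u64) -> Result<(), &'static str> {
        let mut held = self.held.lock().unwrap();
        loop {
            if *held & needed == 0 {
                *held |= needed;
                return Ok(());
            }
            // Hard-coded stand-in for the "error" timeout.
            let (guard, result) = self
                .freed
                .wait_timeout(held, Duration::from_secs(5))
                .unwrap();
            held = guard;
            if result.timed_out() {
                return Err("timed out waiting for locks");
            }
        }
    }

    pub fn unlock(&self, bits: u64) {
        *self.held.lock().unwrap() &= !bits;
        self.freed.notify_all();
    }
}
```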

[–] Miaou@jlai.lu 1 points 8 months ago (1 children)

I'm a bit confused by the need for the thread key, which makes me think I don't understand what problem this is supposed to solve. Does this crate do something more than what C++'s scoped_lock offers?

[–] blackjacksepp@feddit.de 3 points 8 months ago (2 children)

I never used C++'s scoped locks, but as far as I can tell they perform deadlock avoidance at runtime, while this crate works purely at compile time, with almost no code produced in the resulting binary.

This is done by forcing a thread to either lock every Mutex it needs at once or none at all. Thread keys are used to represent this in the type system.
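If I understand it right, the mechanism looks roughly like this sketch. To be clear, this is not the crate's actual API; `ThreadKey` and `lock_all` are invented here just to show how a per-thread token in the type system can rule out nested locking:

```rust
use std::cell::Cell;
use std::sync::{Mutex, MutexGuard};

thread_local! {
    // Tracks whether this thread has already taken its key.
    static KEY_TAKEN: Cell<bool> = Cell::new(false);
}

/// A token that exists at most once per thread and can't be cloned.
pub struct ThreadKey(());

impl ThreadKey {
    pub fn get() -> Option<ThreadKey> {
        KEY_TAKEN.with(|taken| {
            if taken.get() {
                None
            } else {
                taken.set(true);
                Some(ThreadKey(()))
            }
        })
    }
}

/// Locks both mutexes in one call. The returned guards keep the key
/// mutably borrowed, so a second, nested `lock_all` on the same thread
/// is a compile error rather than a potential deadlock.
pub fn lock_all<'k, A, B>(
    _key: &'k mut ThreadKey,
    a: &'k Mutex<A>,
    b: &'k Mutex<B>,
) -> (MutexGuard<'k, A>, MutexGuard<'k, B>) {
    (a.lock().unwrap(), b.lock().unwrap())
}
```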

[–] naonintendois@programming.dev 5 points 8 months ago

I don't see how that can be used to write performant code, though. I usually want to make my locked regions as small as possible to minimize how long other threads are blocked.

[–] Miaou@jlai.lu 2 points 8 months ago* (last edited 8 months ago)

So this is solving the problem of reentrant deadlocks, with an exclusion mechanism, the thread key, that isn't itself enforceable at compile time.

It's quite different from solving deadlocks in general, and it mostly moves the problem to getting the thread key. It's an interesting thought experiment, but unless I'm missing something else, it has no practical advantage over a try_lock.
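For comparison, the try_lock pattern I mean looks roughly like this (the back-off policy is just illustrative):

```rust
use std::sync::{Mutex, MutexGuard};

// Take the first lock, attempt the second, and back off if it's busy.
// A lock-order inversion between two threads can't deadlock this way,
// though it can livelock under heavy contention.
fn lock_both<'a, A, B>(
    a: &'a Mutex<A>,
    b: &'a Mutex<B>,
) -> (MutexGuard<'a, A>, MutexGuard<'a, B>) {
    loop {
        let ga = a.lock().unwrap();
        if let Ok(gb) = b.try_lock() {
            return (ga, gb);
        }
        // Release the first lock before retrying so the other
        // thread can make progress.
        drop(ga);
        std::thread::yield_now();
    }
}
```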

But anyway, thanks for taking the time to explain. I should have read more carefully.