Idea for concurrent transactions

From: hantheman <hantheman12_at_hotmail.com>
Date: 9 Mar 2004 13:44:20 -0800
Message-ID: <580fae16.0403091344.2b8353f4_at_posting.google.com>



Today's databases use locking or some optimistic approach. The first is susceptible to deadlocks and heavy locking overhead in distributed scenarios; the second leads to problems with hot spots and cascading aborts.

One approach I haven't seen discussed much is this:

  1. Each transaction reads objects (or rows or entities - whatever you prefer) from the database, or gets the LAST current version when there's concurrent access to that object.
  2. Each write in any transaction leads to a new in-memory version of the object. This version is also logged in the transaction log.
  3. Each new, concurrent transaction requesting updated objects will use the LAST version.
  4. When all concurrent transactions are done (given some closure), the last version of each object is committed to the database.
  5. Isolation, if I have analyzed this correctly, is complete simply due to multiversioning.
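To make the steps above concrete, here is a minimal Python sketch of the idea as I read it. All names (`VersionedStore`, `commit_closure`, etc.) are my own invention, not from any real database; the internal lock only guards the in-memory version lists in this toy, it is not the database-level locking the scheme tries to avoid.

```python
import threading

class VersionedStore:
    """Toy sketch: every write appends a new in-memory version (step 2),
    readers always get the LAST version (steps 1 and 3), and a final
    commit takes only the last version of each object (step 4)."""

    def __init__(self):
        self._lock = threading.Lock()  # guards the in-memory lists only
        self._versions = {}            # key -> append-only list of versions
        self._log = []                 # transaction log of (txn_id, key, value)

    def read(self, key):
        # Steps 1/3: hand out the latest version of the object.
        with self._lock:
            return self._versions[key][-1]

    def write(self, txn_id, key, value):
        # Step 2: a write creates a new version and records it in the log.
        with self._lock:
            self._versions.setdefault(key, []).append(value)
            self._log.append((txn_id, key, value))

    def commit_closure(self, keys):
        # Step 4: once the concurrent transactions are done, only the
        # last version of each object would be flushed to the database.
        with self._lock:
            return {key: self._versions[key][-1] for key in keys}
```

For example, two concurrent transactions writing the same object would leave two short-lived versions, and `commit_closure` would persist only the second:

```python
store = VersionedStore()
store.write("t1", "x", 1)
store.write("t2", "x", 2)
store.read("x")                 # -> 2, the LAST version
store.commit_closure(["x"])     # -> {"x": 2}
```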

The point is, this is a lock-free concurrency scheme where each new transaction reads the latest version, and any update leads to a short-lived new version. It appears to work out nicely in distributed databases as well, with low message-passing overhead, although I haven't fleshed out all the details yet.

Any comments on this approach? AFAICT, this is different from multi-version databases, yet appears to be a useful approach. Or...?

Thanks in advance. Received on Tue Mar 09 2004 - 22:44:20 CET