Re: computational model of transactions

From: J M Davitt <jdavitt_at_aeneas.net>
Date: Wed, 02 Aug 2006 03:17:43 GMT
Message-ID: <rPUzg.61711$Eh1.11055_at_tornado.ohiordc.rr.com>


Marshall wrote:
> Bob Badour wrote:
>

>>Instead of committing after every transaction, the batches committed
>>after every n transactions. Now, I realize that makes them all part of
>>the same transaction in terms of begin transaction/commit transaction,
>>but in terms of serializability, this introduces no risk.

>
>
> Whoops! You just made me think of something I hadn't thought of
> earlier: composability and nested transactions. If we have "small"
> transactions t1 and t2, and transaction T which "contains" t1 and
> t2, then we need to ensure that executing t1, t2 standalone
> produces the same results as executing t1 and t2 inside T. Which
> means that my idea of "transaction can only see the state at
> transaction-start-time" needs to be the exact current transaction,
> not the nesting transaction. Which I'll have to think about.

I don't think I agree. Big transaction T contains serial transactions t1 and t2, right? t1 and t2 see the database as it was at the instant they started. They may each commit, but if T is rolled back, the small commits are rolled back too, and the database is restored to the value it had at the instant T began.
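To see those semantics concretely, here's a minimal sketch in Python against SQLite -- the one-row schema is invented for illustration, and the small transactions t1 and t2 become savepoints inside the outer transaction T:

    import sqlite3

    # isolation_level=None gives manual transaction control
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO account VALUES (1, 100)")

    conn.execute("BEGIN")          # big transaction T begins

    conn.execute("SAVEPOINT t1")   # small transaction t1
    conn.execute("UPDATE account SET balance = balance - 10 WHERE id = 1")
    conn.execute("RELEASE t1")     # t1 "commits" -- but only within T

    conn.execute("SAVEPOINT t2")   # small transaction t2
    conn.execute("UPDATE account SET balance = balance - 20 WHERE id = 1")
    conn.execute("RELEASE t2")     # t2 "commits" within T

    conn.execute("ROLLBACK")       # T rolls back; t1 and t2 go with it

    # Back to the value the database had at the instant T began:
    print(conn.execute("SELECT balance FROM account WHERE id = 1").fetchone())
    # -> (100,)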

t1 and t2 "standalone" is a very different condition than t1 and t2 within T; I don't agree that the results should be the same when T commits. Certainly, wrt to each of the work units, their effects should be deterministic -- and if that's what you mean, I agree.

> The merging-multiple-transactions objection is interesting but
> doesn't kill the idea (at least not yet), because it's a performance
> optimization based on a specific implementation, which is in turn
> based on the standard model. It would have to be established
> that the same performance effect applied in the new model, and
> I'm not sure it does.

I'm not sure what you mean by "merging." Do you mean that the system must detect whether transactions (obviously starting and ending at different times) may change data read or written by others and then raise some sort of exception? Different implementations take different approaches. Some generate a flurry of exclusive (write) locks and shared (read) locks at a variety of granularities and serialize everything for some (lengthy) interval; others simply detect the condition and pick one (or more) transactions for failure. The different techniques have drastically different effects on performance, whether measured by throughput or consistency. Which one is best? "It depends."
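The detect-and-pick-a-loser approach can be sketched as optimistic validation -- all names below are invented for illustration, and a real system would also track start timestamps so that only transactions that committed during a given transaction's lifetime are checked:

    class Txn:
        def __init__(self, tid):
            self.tid = tid
            self.read_set = set()   # items this transaction has read
            self.write_set = set()  # items this transaction has written

    committed_writes = []  # write sets of already-committed transactions

    def try_commit(txn):
        # Validation: if anything txn read was overwritten by a txn that
        # committed in the meantime, txn is the one picked for failure.
        for writes in committed_writes:
            if txn.read_set & writes:
                return False  # conflict detected; caller must retry
        committed_writes.append(frozenset(txn.write_set))
        return True

    # Two overlapping transactions touching the same item:
    t1, t2 = Txn(1), Txn(2)
    t1.read_set.add("x")      # t1 reads x ...
    t2.write_set.add("x")     # ... while t2 writes x
    print(try_commit(t2))     # True: t2 reaches commit first and wins
    print(try_commit(t1))     # False: t1 fails validation, must restart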

>
> Marshall
>
Received on Wed Aug 02 2006 - 05:17:43 CEST
