Re: computational model of transactions

From: Bob Badour <bbadour_at_pei.sympatico.ca>
Date: Tue, 01 Aug 2006 23:14:00 GMT
Message-ID: <YeRzg.31906$pu3.426186_at_ursa-nb00s0.nbnet.nb.ca>


Bob Badour wrote:

> paul c wrote:
>

>> Bob Badour wrote:
>>
>>> paul c wrote:
>>>
>>>> Bob Badour wrote:
>>>>
>>>>> paul c wrote:
>>>>> ...
>>>>> What if one combines multiple logical units of work into a single 
>>>>> transaction? I have seen this done for performance in batch 
>>>>> processes to facilitate effective physical buffering etc. With 
>>>>> Marshall's proposal, this would not be possible.
>>>>
>>>> So have I, and the batch process was usually serialized in one way 
>>>> or another, either by suspending certain other transactions or even 
>>>> by kicking all users off the system.
>>>
>>> While that's sometimes necessary, the batch processes I referred to 
>>> did not all do that. They just grouped multiple logical units of work 
>>> together before issuing a commit. Serializing was handled by the 
>>> normal concurrency features and isolation level.
>>>
>>> Thus, the batch might issue 10 commits for 1000 logical units of work 
>>> by only committing after every 100th one. For larger logical units of 
>>> work, the batch might issue 100 commits by committing after every 
>>> 10th one.
>>>
>>> There is a performance tradeoff between how much of the log is used 
>>> for uncommitted transactions vs. how efficiently the batch uses the 
>>> network resources. Plus, one has to consider that a rollback will 
>>> revert multiple logical units of work.
>>
>> Like giving all 100,000 employees a 10% raise.  Still, that kind of 
>> commit is not what I call a logical commit; it suggests that a commit 
>> doesn't mark a LUW (logical unit of work) boundary.  I've heard it 
>> called an 'intermediate', aka physical, commit.

>
> There is nothing intermediate about it. I think you are talking about
> splitting a logical unit of work into multiple units of work, which is
> something entirely different. Your comment suggests to me you have
> worked with Oracle and its, ahem, wasteful and unforgiving use of
> rollback segments.
>
> Instead of committing after every transaction the batches committed
> after every n transactions. Now, I realize that makes them all part of
> the same transaction in terms of begin transaction/commit transaction,
> but in terms of serializability, this introduces no risk.
>
> The batches were more along the lines of: I have a file from a different
> source that describes all the changes to the employment records of all
> the employees for the last period, and now I have to apply those changes
> to my database. An individual logical unit of work might update two or
> three tables, and the update would have to be atomic. Instead of
> committing after each logical unit of work, the batch program would
> process a bunch of them before issuing a commit.
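A minimal sketch of that batching pattern, using Python's sqlite3 for illustration (the table names, file contents, and batch size are all hypothetical, not from the original discussion):

```python
import sqlite3

# Hypothetical schema: each logical unit of work touches two tables
# and must be applied atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, salary INTEGER)")
conn.execute("CREATE TABLE audit (emp_id INTEGER, note TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [(i, 50000) for i in range(1, 1001)])
conn.commit()

BATCH_SIZE = 100  # commit after every 100th logical unit of work


def apply_change(conn, emp_id, new_salary):
    """One logical unit of work: updates two tables as a pair."""
    conn.execute("UPDATE employee SET salary = ? WHERE id = ?",
                 (new_salary, emp_id))
    conn.execute("INSERT INTO audit VALUES (?, ?)", (emp_id, "salary changed"))


# Stand-in for the input file describing this period's changes.
changes = [(i, 55000) for i in range(1, 1001)]

for n, (emp_id, new_salary) in enumerate(changes, start=1):
    apply_change(conn, emp_id, new_salary)
    if n % BATCH_SIZE == 0:
        conn.commit()  # 10 commits for 1000 logical units of work
conn.commit()  # flush any partial final batch
```

The tradeoff Bob describes shows up directly in BATCH_SIZE: a larger value means fewer commits (less network and log-flush overhead) but more uncommitted work held in the log, and a rollback reverts up to BATCH_SIZE logical units at once.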

Oh, and because the file might have multiple updates to the same employment record, a later read would have to see the results of an earlier update.
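That property holds within a single open transaction: later statements on the same connection see earlier uncommitted writes. A quick sqlite3 illustration (hypothetical table, for this point only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, salary INTEGER)")
conn.execute("INSERT INTO employee VALUES (1, 50000)")
# No commit yet: the statements below all belong to one open transaction.
conn.execute("UPDATE employee SET salary = salary + 1000 WHERE id = 1")
# A later read in the same transaction sees the earlier uncommitted update,
# so a second change to the same record builds on the first.
salary = conn.execute("SELECT salary FROM employee WHERE id = 1").fetchone()[0]
conn.execute("UPDATE employee SET salary = ? WHERE id = 1", (salary + 1000,))
conn.commit()
```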
