Re: computational model of transactions

From: Brian Selzer <brian_at_selzer-software.com>
Date: Wed, 02 Aug 2006 21:35:38 GMT
Message-ID: <KU8Ag.573$%j7.327_at_newssvr29.news.prodigy.net>


Since nobody else has mentioned it, I think I should raise a flag about one of your ideas. See below.
"Marshall" <marshall.spight_at_gmail.com> wrote in message news:1154452230.243215.174250_at_h48g2000cwc.googlegroups.com...
> I've been thinking about concurrency and transactions. It occurs to me
> that much of the difficulty lies with the fact that multiple concurrent
> transactions may operate on the same data. I begin to recall some
> conversations from years past about "multiple assignment".
>
> It seems to me that much or maybe all of the difficulty with multiple
> concurrent transactions operating on the same data would be
> eliminated if it wasn't possible for a transaction to read back
> the results of its own writes.
>
> In other words, consider the model in which transactions can only
> observe the database state that was current at the time the
> transaction started.
>

SQL Server 2005 provides this capability: its SNAPSHOT isolation level gives a transaction a view of the database as it was when the transaction started, and READ COMMITTED SNAPSHOT does the same at statement granularity. It accomplishes this by keeping the prior versions of changed rows in a version store in tempdb. The mechanism is similar in spirit to that used by the Volume Shadow Copy service in Windows Server 2003 and by the open-file backup options that have been available for years.

The benefit of this mechanism is that it gives the issuing transaction the same level of protection as READ COMMITTED without taking any shared locks. Fewer locks mean less blocking, fewer deadlocks, and better overall performance. But it's not a silver bullet: there are times when it is definitely contraindicated, and the danger is that DBAs and developers will use it without fully understanding the ramifications.

It should never be used in a posting routine, because the information read out can already be stale before the read even completes, and a subsequent update based on that information would introduce garbage into the database. It is inevitable that overzealous DBAs and developers will use it to boost performance and end up with corrupt databases.

There are other arguments against using this in any procedure that issues updates, but giving them the attention they need would take more time and space than a single post allows or warrants.

> So, how much of a burden, at the *logical* level, would this be?
> Clearly it is not the same as with SQL transactions. Does it
> matter? Is there a use case I'm not thinking of that makes this
> problematic? I will admit there have been times where I've
> opened up an interactive SQL session, started a transaction,
> and typed a whole series of DML, and observed the results
> along the way, but I don't think I've ever written *code* that
> does anything like that.
>
> Your experiences and thoughts are appreciated.
>
>
> Marshall
>
Received on Wed Aug 02 2006 - 23:35:38 CEST