Re: counting rows
Date: Sun, 2 Apr 2006 08:01:44 +0000 (UTC)
Message-ID: <slrne2v17b.qol.lefty_at_goedel.home.rtfm.gr>
On 2006-04-02, Tony Rogerson <tonyrogerson_at_sqlserverfaq.com> wrote:
>> No locking is only an issue on /some/ databases.
>> e.g. in read committed isolation level (the default) with two transactions
>> both inserting into the same table neither Oracle nor PostgreSQL lock
>> the table. Both give the correct reply for count(*).
>
> At the start of the transaction they do (sql 2005 has versioning now also
> and read committed without blockers), but they don't take into account
> inserts and updates do they?
>
Well, yes they do, as long as the two transactions don't try to change the same row, of course. The correct count(*) was given /after/ inserts by the two long-running transactions.
However, I see where you're coming from, since doing the same test on MS SQL Server 2000 (MSDE v8.00.760), this happened:

s1: insert          -> OK
s1: select count(*) -> OK
s2: insert          -> OK
s2: select count(*) -> froze (waiting for lock)
s1: select *        -> ERROR: Transaction (Process ID 68) was deadlocked on lock
                       resources with another process and has been chosen as the
                       deadlock victim. Rerun the transaction.
s2: continues, returning the count
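The contrast with the non-blocking behaviour I described can be sketched with SQLite from Python. This is a minimal illustration, not Oracle or PostgreSQL: SQLite's rollback-journal mode lets a reader proceed while another connection holds an uncommitted insert, so count(*) never blocks and only committed rows are counted. The table name is made up for the example.

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database (":memory:" would not be shared).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
writer.commit()

# The INSERT opens a transaction that stays uncommitted for now.
writer.execute("INSERT INTO t VALUES (1)")

# The reader is not blocked, and sees only committed state.
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(before, after)  # 0 1
```

The point is the same as in the Oracle/PostgreSQL test: a reader running under read committed semantics neither waits on the writer's lock nor miscounts; it simply does not see the uncommitted row.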
> But as I said, you'd need to serialise in order to make sure you didn't have
> any phantom inserts/deletes.... (and that is regardless of using a trigger
> or not)
No, you need at least the read committed isolation level; however, arguably the serializable isolation level is more correct.
The problem is that if you have a trigger to maintain the count then the row that the trigger updates is also the /same/ row that the other transactions update. In such a case locking is unavoidable.
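A concrete sketch of that trigger-maintained count, again using SQLite from Python (the table and trigger names are invented for illustration). Note that every insert and delete funnels through an UPDATE of the one and only row in the counter table, which is exactly why concurrent writers must serialise on it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT);

-- Single-row counter table: every writer contends on this same row.
CREATE TABLE items_count (n INTEGER NOT NULL);
INSERT INTO items_count VALUES (0);

CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
  UPDATE items_count SET n = n + 1;
END;

CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
  UPDATE items_count SET n = n - 1;
END;
""")

for v in ("a", "b", "c"):
    conn.execute("INSERT INTO items (val) VALUES (?)", (v,))
conn.execute("DELETE FROM items WHERE val = 'b'")

n = conn.execute("SELECT n FROM items_count").fetchone()[0]
print(n)  # 2 -- agrees with SELECT COUNT(*) FROM items
```

The maintained count stays consistent with count(*), but the price is that the counter row becomes a single point of contention for all writers, which is the locking problem described above.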
> That's too basic an experiment; what happens when inserts are occurring at the
> same time the COUNT(*) is taking place - you need to serialise so that
> doesn't happen.
>
But the inserts are happening while the other transactions are active. I was testing long-running transactions (autocommit off). When the lock is released has more to do with when the transaction completes (which is not necessarily IO bound - think of a slow network link and extended transactions) than with how much IO the database server does.
> And when you serialise it becomes an IO bottleneck, everything waits until
> that serialised transaction completes.
>
Lefty Kreouzis