Re: counting rows

From: Tony Rogerson <tonyrogerson_at_sqlserverfaq.com>
Date: Sun, 2 Apr 2006 08:08:05 +0100
Message-ID: <e0nt8e$o78$1$8300dec7_at_news.demon.co.uk>


> No, locking is only an issue on /some/ databases.
> e.g. in read committed isolation level (the default), with two transactions
> both inserting into the same table, neither Oracle nor PostgreSQL locks
> the table. Both give the correct reply for count(*).

At the start of the transaction they do (SQL Server 2005 now has row versioning as well, and read committed without blocking), but they don't take into account concurrent inserts and updates, do they?

But as I said, you'd need to serialise in order to make sure you didn't have any phantom inserts/deletes... (and that is regardless of whether you use a trigger or not).
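
To be concrete, the sort of trigger I have in mind is something like the sketch below - the names are made up for illustration (a pictures table and a one-row-per-table row_counts summary table), and it's a rough outline rather than production code:

create table row_counts (table_name sysname primary key, row_count int not null);

create trigger trg_pictures_rowcount on pictures
after insert, delete
as
begin
    set nocount on;
    -- adjust the stored count by however many rows this statement
    -- inserted or deleted
    update row_counts
       set row_count = row_count
                       + (select count(*) from inserted)
                       - (select count(*) from deleted)
     where table_name = 'pictures';
end;

A reader then does a single-row lookup against row_counts instead of scanning the whole table with COUNT(*).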

> As I said before, in PostgreSQL in transaction isolation level serializable
> you don't have contention. I just tried the following experiment:

That's too basic an experiment - what happens when inserts are occurring at the same time the COUNT(*) is taking place? You need to serialise so that doesn't happen.

And when you serialise it becomes an IO bottleneck: everything waits until that serialised transaction completes.
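
Roughly what I mean, under a locking scheme (e.g. SQL Server without the new row versioning) - again just a sketch, the pictures table is made up:

s1: set transaction isolation level serializable;
s1: begin tran;
s1: select count(*) from pictures;
    -- the scan takes shared locks that are held to the end of the transaction
s2: insert into pictures (id) values (42);
------- s2 blocks until s1 commits or rolls back
s1: commit;
------- s2's insert now completes

Every writer queues up behind the counting transaction for as long as it runs, which is why the COUNT(*) itself becomes the bottleneck.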

-- 
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials


"E. Lefty Kreouzis" <lefty_at_goedel.home.rtfm.gr> wrote in message 
news:slrne2tsrs.5n3.lefty_at_goedel.home.rtfm.gr...

> On 2006-04-01, Tony Rogerson <tonyrogerson_at_sqlserverfaq.com> wrote:
>> I'll answer that question with another question - do you want it accurate?
>> In which case you'll need to serialise anyway.
>>
>
> You clearly want it correct; otherwise you don't want to evaluate count(*),
> you want to evaluate something like 'estimate rows', which is a different
> kettle of fish altogether.
>
>> Locking will be there anyway because of inserting the picture row data.
>>
>
> No, locking is only an issue on /some/ databases.
> e.g. in read committed isolation level (the default), with two transactions
> both inserting into the same table, neither Oracle nor PostgreSQL locks
> the table. Both give the correct reply for count(*).
>
>> The update of the count is simply modifying 4 bytes on a single row which
>> would be accessed via a unique index (very little IO) - do you really think
>> the contention will be with the update?
>>
>
> The contention will be because locking will be mandatory in the trigger,
> since in order for the count to be correct the transactions will *have* to
> be serialised.
> IO has *nothing* to do with it. If you have to serialize you take a
> performance hit on concurrent transactions, no two ways about it.
>
>> By doing COUNT(*) and serialising, the query will cause considerable
>> contention because the COUNT(*) query will take a lot longer.
>>
>
> As I said before, in PostgreSQL in transaction isolation level serializable
> you don't have contention. I just tried the following experiment:
>
> open two sessions s1 & s2
>
> s1: create table s (id int4 primary key);
> s1: set session characteristics as transaction isolation level serializable;
> s2: set session characteristics as transaction isolation level serializable;
> s1: begin work;
> s2: begin work;
> s1: insert into s values(1);
> s1: select * from s;
> id
> ----
> 1
> (1 row)
> s1: select count(*) from s;
> count
> -------
> 1
> (1 row)
> s2: select * from s;
> id
> ----
> (0 rows)
> s2: select count(*) from s;
> count
> -------
> 0
> (1 row)
> s2: insert into s values(2);
> s2: select * from s;
> id
> ----
> 2
> (1 row)
> s2: select count(*) from s;
> count
> -------
> 1
> (1 row)
> s1: select * from s;
> id
> ----
> 1
> (1 row)
> s1: insert into s values(3);
> s1: select * from s;
> id
> ----
> 1
> 3
> (2 rows)
> s1: select count(*) from s;
> count
> -------
> 2
> (1 row)
> s2: commit;
> s1: select * from s;
> id
> ----
> 1
> 3
> (2 rows)
> s1: select count(*) from s;
> count
> -------
> 2
> (1 row)
> s1: commit;
> s1: select * from s;
> id
> ----
> 1
> 2
> 3
> (3 rows)
> s1: select count(*) from s;
> count
> -------
> 3
> (1 row)
>
> No locks, no contention (mind you, I was careful not to insert using the
> same primary key - if I had done so then I would have had the following):
>
> s1: begin work;
> s2: begin work;
> s1: insert into s values(4);
> INSERT 0 1
> s2: insert into s values(4);
> ------- Waiting for lock
> s1: commit;
> s2: prints ERROR: duplicate key violates unique constraint "s_pkey"
>
> Lefty Kreouzis
>
>
>
>
Received on Sun Apr 02 2006 - 09:08:05 CEST
