Re: Deadlock problem

From: Ruud de Koter <ruud_dekoter_at_hp.com>
Date: Mon, 18 Mar 2002 10:23:03 +0100
Message-ID: <3C95B1F7.739E1B41_at_hp.com>


Hi Tommi,

Tommi Korhonen wrote:
>
> I have a client program executing database operations like
>
> op1: newid := select max(id)+1 from mytable
> op2: insert into mytable values (newid,data)
>
> where op1 and op2 are executed inside the same transaction. If two clients are
> running at the same time, they generate transactions T1 and T2. If the
> isolation level of the database is set to serializable, it's possible to
> schedule operations like
>
> T1: start
> T2: start
> T1: op1
> T2: op1
> T1: op2
> T1: commit
> T2: op2
> T2: commit
>
> which will lead to deadlock, since
>
> T1: op1 -> locks the table with a shared-read lock to prevent phantom rows
> T2: op1 -> locks the table with a shared-read lock to prevent phantom rows
> T1: op2 -> waits for T2 to unlock
> (T1: commit -> never gets here, because T1 is still waiting)
> T2: op2 -> waits for T1 to unlock --> DEADLOCK
>
> (details taken from Sybase, but the result was the same with all other
> databases tried)
>
> If the isolation level is lower than serializable, the deadlock is avoided, but
> both clients will get the same id number (which is not what is intended).
>
> Triggers, auto-increment fields or stored procedures are not the answer, because
> the clients must work with every database, so proprietary features cannot be used.
> Locking the whole table with an exclusive lock would solve the situation, but I
> think this also requires proprietary actions.
>
> How can I fix this? I would think this is hardly a unique problem, op1+op2 must be a
> very basic operation in any client-server software. Of course, the DBMS can
> handle the deadlock by killing one of the transactions, but I would like to
> avoid the deadlock situation here altogether.

One solution would be to construct an additional table that stores the current max(id). You can then query this table and update it with a new id value for each operation. If you need max(id) values for more than one table, simply add the table name as a column to the new table and query on it.
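A minimal sketch of such a table, assuming a hypothetical name id_registry (the table and column names below are mine, not anything from your schema):

    create table id_registry (
        table_name  varchar(30) not null primary key,
        last_id     integer     not null
    );

    -- seed the counter once for each table you generate ids for
    insert into id_registry (table_name, last_id)
    select 'mytable', coalesce(max(id), 0) from mytable;

The select on mytable is only needed once, at seeding time.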

This is probably also faster, as you no longer have to determine the max(id) over the whole table for every insert you perform.

Locking problems on the new table are easier to prevent:
- lock rows exclusively, not the complete table (in Oracle I would use 'FOR UPDATE OF'),
- make sure reading and updating the row is (part of) one transaction.

As one transaction performs all of these actions before releasing the exclusive lock, there is no risk that another transaction reads the same id in the meantime. One catch: you should keep the transaction as short as possible!
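Put together, each client transaction would then look roughly like the sketch below. This is only an outline against the hypothetical id_registry table from above; the exact row-locking syntax differs per database, so treat the FOR UPDATE clause as a placeholder for whatever your product offers, and :newid / :data stand for host variables bound by the client.

    begin transaction;

    -- take an exclusive lock on the counter row only, not the whole table
    select last_id
      from id_registry
     where table_name = 'mytable'
       for update;

    -- reserve the next id while still holding the lock
    update id_registry
       set last_id = last_id + 1
     where table_name = 'mytable';

    -- :newid is the value just read, plus one
    insert into mytable (id, data) values (:newid, :data);

    commit;

Because the select and update touch only a single row, the lock is held very briefly, which keeps the window in which a second client has to wait small.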

Hope this gets you under way,

Ruud.


Ruud de Koter                    HP OpenView Software Business Unit
Senior Software Engineer         IT Service Management Operation
Telephone: +31 (20) 514 15 89    Van Diemenstraat 200  
Telefax  : +31 (20) 514 15 90    PO Box 831
Telnet   : 547 - 1589            1000 AV  Amsterdam, the Netherlands
Email : ruud_dekoter_at_hp.com
internet: http://www.openview.hp.com/itsm
          http://www.openview.hp.com/assetview
intranet: http://ovweb.bbn.hp.com/itservicemanager
--------------------------------------------------------------------------------------
Received on Mon Mar 18 2002 - 10:23:03 CET
