Deadlock problem

From: Tommi Korhonen <Tommi.Korhonen_at_iki.fi>
Date: 17 Mar 2002 17:30:42 GMT
Message-ID: <a72js2$jdt$1_at_oravannahka.helsinki.fi>



I have a client program executing database operations like

op1: newid := select max(id)+1 from mytable
op2: insert into mytable values (newid, data)

where op1 and op2 are executed inside the same transaction. If two clients are running at the same time, they generate transactions T1 and T2. If the isolation level of the database is set to serializable, it's possible for the operations to be scheduled like
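To make the pattern concrete, here is a minimal, self-contained sketch of op1+op2 in one transaction, using Python's sqlite3 as a stand-in for the real DBMS (the table layout is from the post; the data values are invented for the example):

```python
import sqlite3

# In-memory SQLite stands in for the real DBMS.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we issue BEGIN/COMMIT ourselves
conn.execute("CREATE TABLE mytable (id INTEGER, data TEXT)")
conn.execute("INSERT INTO mytable VALUES (1, 'first')")

cur = conn.cursor()
cur.execute("BEGIN")
# op1: compute the next id from the current maximum
newid = cur.execute("SELECT MAX(id) + 1 FROM mytable").fetchone()[0]
# op2: insert a row using that id, inside the same transaction
cur.execute("INSERT INTO mytable VALUES (?, ?)", (newid, "second"))
cur.execute("COMMIT")

print(newid)  # -> 2
```

With a single client this is harmless; the trouble only starts when two such transactions interleave.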

T1: start
T2: start
T1: op1
T2: op1
T1: op2
T1: commit
T2: op2
T2: commit

which will lead to deadlock, since

T1: op1 -> locks the table with shared-read lock to prevent phantom rows
T2: op1 -> locks the table with shared-read lock to prevent phantom rows
T1: op2 -> waits for T2 to unlock

(T1: commit -> never gets here, because T1 is still waiting)
T2: op2 -> waits for T1 to unlock --> DEADLOCK

(details taken from sybase, but the result was the same with all other
databases tried)

If the isolation level is lower than serializable, the deadlock is avoided, but both clients will get the same id number (which is not what is intended).
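The duplicate-id outcome at a lower isolation level can be simulated without a database at all: if both clients run op1 before either runs op2, they compute the same id. A small sketch (the barrier is only there to force the T1:op1, T2:op1 interleaving):

```python
import threading

table_ids = [1]               # ids already in "mytable"
lock = threading.Lock()       # protects the list itself, not the read-then-insert pattern
barrier = threading.Barrier(2)
results = []

def transaction():
    # op1: both clients read max(id)+1 before either inserts
    newid = max(table_ids) + 1
    barrier.wait()            # force the interleaving T1:op1, T2:op1, T1:op2, T2:op2
    # op2: insert the computed id
    with lock:
        table_ids.append(newid)
        results.append(newid)

t1 = threading.Thread(target=transaction)
t2 = threading.Thread(target=transaction)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # -> [2, 2]: both clients generated the same id
```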

A trigger, auto-increment field, or stored procedure is not the answer, because the clients must work with every database, so proprietary features cannot be used. Locking the whole table with an exclusive lock would solve the situation, but I think this also requires some proprietary actions.
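To illustrate the exclusive-lock-first idea in plain SQL terms: one pattern sometimes suggested as portable is to keep the next id in a one-row key table and UPDATE it before reading it, so the transaction takes its exclusive lock up front and never has to upgrade a shared lock. A sketch under those assumptions (again with sqlite3 as a stand-in; the `idkey` table name and layout are invented, and a single connection is shown, so the concurrency benefit is only described in the comments):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE mytable (id INTEGER, data TEXT)")
# Hypothetical one-row key table holding the last id handed out.
conn.execute("CREATE TABLE idkey (nextid INTEGER)")
conn.execute("INSERT INTO idkey VALUES (0)")

cur = conn.cursor()
cur.execute("BEGIN")
# The UPDATE acquires an exclusive lock on idkey immediately, so a second
# transaction blocks here (no shared-to-exclusive upgrade, hence no deadlock cycle).
cur.execute("UPDATE idkey SET nextid = nextid + 1")
newid = cur.execute("SELECT nextid FROM idkey").fetchone()[0]
cur.execute("INSERT INTO mytable VALUES (?, ?)", (newid, "data"))
cur.execute("COMMIT")

print(newid)  # -> 1
```

Whether the UPDATE-then-SELECT sequence behaves this way on every target DBMS is exactly the kind of portability question the post is asking about, so treat this as a sketch, not a verified answer.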

How can I fix this? I would think this is not a unique problem; op1+op2 must be a very basic operation in every client-server program. Of course, the DBMS can handle the deadlock by killing one of the transactions, but I would like to avoid the deadlock situation here altogether.
