Re: OO and relation "impedance mismatch"

From: Kenneth Downs <firstinit.lastname_at_lastnameplusfam.net>
Date: Wed, 06 Oct 2004 16:24:23 -0400
Message-ID: <o9k1kc.dob.ln_at_mercury.downsfam.net>


Tony Andrews wrote:

> Kenneth Downs wrote:

>> Hmmm, good question.  At a first glance, the unique constraint should
>> do a dirty read and the RI should not.  Uniques need to err on the
>> side of catching all possible violations, hence the dirty read.  RI
>> needs to err on the side of not giving an OK on a value that may not
>> yet be committed, so you do the opposite.
>>
>> If the constraints are coded this way, user b's RI will fail, as it
>> should.
>>
>> Thanks for raising the point.

>
> That's OK. And I'll take it a bit further with 2 more examples.
>
> Example 1:
> -- Assume a parent row with pid=123 exists already, but no related
> child rows exist
> User A> insert into child (pid, cid) values (123, 456);
> -- Succeeds
> User B> delete from parent where pid = 123;
> -- Dirty read sees User A's insert and so fails the delete
>
> Example 2:
> -- Assume a parent row with pid=123 exists already, AND that related
> child rows exist
>
> User A> delete from child where pid = 123;
> -- Succeeds
> User B> delete from parent where pid = 123;
> -- Dirty read sees User A's delete and so SUCCEEDS
> User A> rollback;
> -- Oh dear, now we have orphans again!
>
> So here the very same DELETE trigger needs to do a dirty read in one
> case and a clean read in another. Perhaps the answer is to do both
> type of read in the trigger, and fail if either finds a record?
>

OK, third of three replies. Your argument cuts to the heart of the issue. Simple cases are easy, but overlapping transactions with the possibility of rollback will create problems. They might be solved by pursuing ever more detailed cases of dirty/non-dirty, but on that road surely lies madness. Your objection to dirty reads is accepted with no further argument.

So is that the end of it? As I was running through the cases I realized that we are both ignoring locks. What happens when we take them into account? In SQL Server, the product I know best, if I make no explicit vendor-specific statement about either action, then in every case the second action will block and wait, because the first action has already put an exclusive write lock on the affected rows. In other words, I don't need to worry about dirty reads at all, because SQL Server's locking mechanism will magically make everything work anyway, which I had forgotten in earlier posts. Of course such a claim bears careful case-by-case testing, no matter how well I think I know the product.
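
To make that concrete, here is roughly how I would expect your Example 2 to play out under SQL Server's default READ COMMITTED isolation, with the RI check done by an ordinary read inside a trigger. This is a sketch of expected behavior, not something I have tested yet, and the session labels are only for illustration:

  -- Session A
  BEGIN TRANSACTION;
  DELETE FROM child WHERE pid = 123;
  -- A now holds exclusive locks on the deleted child rows

  -- Session B
  BEGIN TRANSACTION;
  DELETE FROM parent WHERE pid = 123;
  -- the RI trigger's SELECT against child blocks on A's exclusive locks,
  -- so B simply waits rather than reading either a dirty or a stale answer

  -- Session A
  ROLLBACK;
  -- the child rows reappear; B's blocked read now sees them and the
  -- trigger fails B's delete, so no orphans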

That leaves lock timeouts, and the update conflict errors they produce, as the only remaining issue, and those are mitigated by the always-recommended practice of keeping transactions very small.
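
For completeness, the timeout side can also be made explicit rather than left at SQL Server's default of waiting indefinitely. A minimal sketch along those lines (the 5-second value is arbitrary, and the error check is the bare SQL Server 2000 style):

  SET LOCK_TIMEOUT 5000;        -- give up after 5 seconds instead of waiting forever
  BEGIN TRANSACTION;
  DELETE FROM child WHERE pid = 123;
  IF @@ERROR <> 0               -- e.g. 1222, "Lock request time out period exceeded"
      ROLLBACK TRANSACTION;
  ELSE
      COMMIT TRANSACTION;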

I really had no intention of going this deep at this point; it was vaguely on my schedule for a couple of weeks from now. But now that you've brought it up, I have at least avoided the trap of messing around with dirty reads, and I know that I still have to determine whether the locks really behave as expected.

-- 
Kenneth Downs
Use first initial plus last name at last name plus literal "fam.net" to
email me