Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Single-statement 'write consistency' on read committed. Oh, really?

From: Tengiz Kharatishvili <tengizk_at_Hotmail.com>
Date: 1 Dec 2003 01:12:59 -0800
Message-ID: <3fcc40e7.0312010112.4ffa7d46@posting.google.com>


"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote in message news:<bqdb6b$3dl$1$8300dec7_at_news.demon.co.uk>...
> The issue is known, and has been discussed
> (for example on Tom Kyte's website).

Yes, the issue is mentioned on the website, as well as in a couple of vague paragraphs in his book. However, I don't find either treatment satisfactory.

> Unfortunately it is part of the issue - although your example
> has produced an unexpected result, it has done so because
> of an appalling design which is inconsistent with the 'careful
> developer' that you cite at the end of your note. In particular,
> the Concepts manual warns specifically that DML that uses
> subqueries should use serializable transactions - but your design
> has denormalised the data so that you are doing an update which
> should have been from a subquery, but using summary data that
> has been written to the table, thus bypassing Oracle's warning.
>

Yes, I agree that a normalized design would formally fall into the category mentioned in the docs (an update with a subquery), and then formally there would be no issue at all. But the problem is that transaction isolation concepts have nothing to do with relational theory. A bad application design is not necessarily an incorrect one.

If a committed database state (A) is transformed into state (B), and a concurrent transaction (T) that commits after state (B) leaves behind a state (C) that could not possibly have been produced from either (A) or (B), then I have an isolation problem, regardless of how good or bad the application design is.

In my example, the following is the state (A):

            ACC     AMOUNT    PENDING CLEARED
     ---------- ---------- ---------- --------- 
              1         10         -2
              2          0          2
              3          0          0 26-NOV-03

State (B):

            ACC     AMOUNT    PENDING CLEARED
     ---------- ---------- ---------- --------- 
              1         10         -3
              2          0          2
              3          0          1

Transaction (T):

     update accounts
     set amount = amount + pending, pending = 0, cleared = sysdate
     where cleared is null;

State (C):

            ACC     AMOUNT    PENDING CLEARED
     ---------- ---------- ---------- --------- 
              1          7          0 26-NOV-03
              2          2          0 26-NOV-03
              3          0          1

Forget the creepy business case behind the example. The point is that state (C) could not be produced from either (A) or (B) by (T) in any serial execution.
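To make the claim concrete, here is a minimal Python sketch - a model of the table, not Oracle itself - that applies (T) serially to both (A) and (B) and checks that neither run yields (C). The constant standing in for sysdate is my own stand-in:

```python
# Model the three states from the post as lists of
# (acc, amount, pending, cleared) tuples.
D = '26-NOV-03'  # stand-in for sysdate (assumption for illustration)

A = [(1, 10, -2, None), (2, 0, 2, None), (3, 0, 0, D)]
B = [(1, 10, -3, None), (2, 0, 2, None), (3, 0, 1, None)]
C = [(1, 7, 0, D), (2, 2, 0, D), (3, 0, 1, None)]

def apply_t(rows):
    """Serial execution of transaction (T):
    UPDATE accounts
    SET amount = amount + pending, pending = 0, cleared = sysdate
    WHERE cleared IS NULL;"""
    out = []
    for acc, amount, pending, cleared in rows:
        if cleared is None:                        # WHERE cleared IS NULL
            out.append((acc, amount + pending, 0, D))
        else:                                      # row not in scope: untouched
            out.append((acc, amount, pending, cleared))
    return out

print(apply_t(A) == C)  # False
print(apply_t(B) == C)  # False
```

Note that (C) is a mixture: rows 1 and 2 carry amounts computed from (B)'s pending values, while row 3 was skipped as if the predicate had been evaluated against (A). No serial order produces that combination.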

> I think I could find a couple of major vendors who would
> also get your correct result. I think they both might lock the
> entire table to do so - against reads and writes.
>

I disagree – any reasonably implemented locking scheduler with row-level locking would produce the correct result with no table-level locks. I know this for sure.

> I think your descriptions can be simplified to:
> Oracle restarts only if it comes to a row where the
> predicate columns have moved the row from
> being 'in-scope' to being 'out of scope'. (So no-change
> to predicates, and a change that moves a row from
> 'out of scope' to 'in-scope' is ignored.)
>

Yes, you're right.
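The restart rule, as I understand it, can be sketched like this - a hypothetical model of the decision, not Oracle's actual code; the predicate matches my example's WHERE clause:

```python
# Sketch (assumption, not Oracle internals): while updating, compare each
# target row's version under the statement's snapshot with its current
# committed version. Restart only when a predicate column changed so that
# the row moved from 'in-scope' to 'out of scope'.
def needs_restart(snapshot_row, current_row, predicate):
    was_in_scope = predicate(snapshot_row)   # row qualified under the snapshot
    is_in_scope = predicate(current_row)     # does its current version qualify?
    return was_in_scope and not is_in_scope

pred = lambda r: r['cleared'] is None        # WHERE cleared IS NULL

# A non-predicate column (pending) changed, row still in scope: no restart.
print(needs_restart({'cleared': None, 'pending': -2},
                    {'cleared': None, 'pending': -3}, pred))  # False

# The row was cleared by a concurrent commit, moving it out of scope: restart.
print(needs_restart({'cleared': None, 'pending': 2},
                    {'cleared': '26-NOV-03', 'pending': 0}, pred))  # True
```

This is exactly why my example slips through: the concurrent change to pending leaves the row in scope, so no restart fires, yet the amount computed from it is stale.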

> Surely you don't expect a serious answer to these (very sensible)
> questions.
>

Right again, I wouldn't expect them to document these internals. What I do expect is for them to honor the fundamental promise they make: statement-level write consistency should either be guaranteed or not. One cannot possibly be happy with this 'it kind of works' behaviour, with no clear explanation of when exactly it is supposed to work.

> My opinion, still subject to fiddling around a bit more, is that
> this is an example of optimistic locking gone wrong. Clearly
> Oracle could do:
> 'select for update' / 'update'
> as an internal response to an update command. This would
> avoid the issue, but at a cost which would be unnecessary in
> an extremely high proportion of cases. So Oracle has gone
> for the approach:
> update
> if we find a problem
> rollback
> select for update
> update
> end if;

Agreed, that's basically what I've deduced from my experiments too. Except that the retry doesn't have to be a 'select for update' followed by the 'update' - a plain re-run of the update works as well, provided you're willing to tolerate multiple restarts.
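The two retry strategies can be sketched as follows - a hedged model of the control flow, with `update_once` and `lock_rows` as hypothetical stand-ins for the engine's internal steps; `update_once` returns None on a write conflict:

```python
# Jonathan's version: on conflict, roll back, pin the rows at a fresh
# SCN ('select for update'), then retry - at most one restart is needed,
# because the locks prevent any further conflicting writes.
def update_with_lock_retry(update_once, lock_rows):
    changed = update_once()      # optimistic first attempt
    if changed is None:          # write conflict detected
        lock_rows()              # 'select for update' at a fresh SCN
        changed = update_once()  # retry cannot conflict now
    return changed

# The alternative: simply re-run the whole update against a fresh
# snapshot each time; under sustained concurrent writes this may
# restart many times, so a bound is needed.
def update_with_plain_retry(update_once, max_restarts=10):
    for _ in range(max_restarts):
        changed = update_once()  # re-run on a fresh snapshot
        if changed is not None:
            return changed       # completed without conflict
    raise RuntimeError('too many restarts')
```

The lock-based variant trades extra locking cost on the retry path for a guaranteed bound of one restart; the plain variant keeps the happy path cheap but gives up that bound.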

> However, my feeling is that the code that does the 'select
> for update' bit has a cunning optimisation trick built in that
> goes for the wrong SCN.
>

Precisely - I believe it's either a documentation bug (they should not claim that Oracle provides 'single-statement write consistency' without specifying the exact circumstances under which it really does) or a defect in the write-conflict handling code - namely, the 'optimization' is too aggressive. Moreover, I believe they are very well aware of the problem, but won't fix it for non-technical reasons.

And finally, I must confess: I recently tried to post the original detailed question on Tom's website, but he essentially refused to answer and did not make the question and his reply publicly visible.

So the whole thread was a bit of a provocation. I hope it was educational for some of us, though.

Received on Mon Dec 01 2003 - 03:12:59 CST

