Oracle FAQ: Your Portal to the Oracle Knowledge Grid
 


Re: Is Oracle deliberately difficult?

From: Craig Munday <craigm_at_access.com.au>
Date: Thu, 7 Sep 2000 09:27:46 +1000
Message-ID: <syzt5.17$yQ1.989@nsw.nnrp.telstra.net>

In my opinion there is no hard and fast rule about consistency; it really depends on what degree of interference an application can tolerate from other transactions. In some instances reading inconsistent data might well be acceptable; in others it will not be.

The trade-off is between isolation and performance: in general, the higher the isolation required, the higher the performance penalty.
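One way to see where that cost comes from is a toy multi-version store (a hypothetical sketch, not Oracle's actual implementation): serving reads as of an older snapshot forces old row versions to be retained and searched, which is roughly the work Oracle does with its rollback (undo) information.

```python
# Hypothetical sketch: a tiny multi-version store. Higher isolation means
# reads happen against a fixed snapshot, which requires keeping and
# scanning old versions instead of just returning the latest value.

class MVStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), oldest first
        self.ts = 0          # monotonically increasing commit timestamp

    def write(self, key, value):
        """Commit a new version of `key` at the next timestamp."""
        self.ts += 1
        self.versions.setdefault(key, []).append((self.ts, value))

    def read(self, key, snapshot_ts=None):
        """snapshot_ts=None reads the latest committed value (cheapest);
        a fixed snapshot_ts gives a consistent view, at the cost of
        retaining every version back to that point."""
        ts = self.ts if snapshot_ts is None else snapshot_ts
        for commit_ts, value in reversed(self.versions[key]):
            if commit_ts <= ts:
                return value
        raise KeyError(key)

store = MVStore()
store.write("X", True)    # committed at ts=1
snap = store.ts           # a long-running query notes its snapshot here
store.write("X", False)   # another transaction commits at ts=2

print(store.read("X"))        # latest committed value: False
print(store.read("X", snap))  # consistent view as of the snapshot: True
```

The same pattern explains the performance penalty: the lowest-isolation read is one lookup, while the snapshot read may have to walk back through arbitrarily many retained versions.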

Cheers,
Craig.

"Bob Cunningham" <bcunn_at_cucomconsultants.com> wrote in message news:39b6b43f.2129381_at_news.telusplanet.net...
> On Tue, 05 Sep 2000 15:54:43 GMT, jxs_at_wolpoff_nospam_law.com (Jay M.
> Scheiner) wrote:
>
> >I'm not talking about reading the record while it is being changed.
> >I'm talking about
> >1) you start processing 10 million records, looking for X to be true.
> >2) someone else changes record # 9 million, from X is true to X is
> >false. They commit the transaction. The record is updated, done, not
> >in a state of change, etc.
> >3) Your process finally gets to record # 9 million. What should you
> >see? The obsolete, incorrect X is true, or the new, correct, X is
> >false?
> >
> >Just because X USED TO BE true doesn't mean that your program should
> >handle it that way, just because that was the value when you started. When it looks at the
> >record, it should see the CORRECT, CURRENT value. If anyone can
> >convince me that this is logically wrong, then I will be happy to
> >admit I am wrong, just like...
> >
> >the block size thing.
> >
> >
>
> What would you think of your proposal if that other user's process
> actually changed more than one row within the same
> transaction? Say, the first row changed was the 4th row your process
> encountered and the other change wasn't encountered by your process
> until the 9 millionth row. But, when you looked at the 4th row that
> other user hadn't even started their process yet, but by the time you
> got to the 9 millionth row it had all been done, updated and committed
> in the same fashion as you're talking about.
>
> Now your process has potentially operated on an inconsistent view of
> data since it saw the 4th row before its change and the 9 millionth
> row after the change. Those rows were changed together under the same
> transaction for a reason...there may be some application level
> integrity relationship between them...or maybe the other programmer
> just wanted to periodically commit a large number of unrelated changes
> together...how would we know?
>
> I don't think one should attempt to operate against related
> information collected from different consistent states of a database.
> A consistent state exists only for a single point in time and will
> evolve to a different state of consistency the very next instant.
>
> The only question for me is what point in time that consistency is
> important to the nature of my transaction: at the statement level (i.e.
> the point in time the statement commenced execution) or the
> transaction level (i.e. the point in time that the first query/DML
> statement commenced execution under the new transaction).
>
> But never would I agree to having the consistency level determined by
> the point in time that Oracle's physical data retrieval strategy
> eventually got around to locating a candidate row for my
> query...particularly when that strategy was chosen by the optimizer
> over which all of us are confident of having absolute, unerring
> control ;)
>
>
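The two-row scenario above can be sketched directly (row numbers and values are illustrative): rows 4 and 9,000,000 are updated together in one committed transaction while a long scan is in progress. A "read whatever is current" scan sees row 4 before the change and row 9,000,000 after it, a combined state that never existed; a statement-level consistent read sees both rows as of the scan's start.

```python
# Sketch of the inconsistent-scan scenario: two rows are changed and
# committed together mid-scan. Reading current values mixes old and new;
# reading against a snapshot taken at scan start stays consistent.

rows = {4: "old", 9_000_000: "old"}
snapshot = dict(rows)               # consistent view as of scan start

seen_current, seen_snapshot = {}, {}
for r in (4, 9_000_000):
    if r == 9_000_000:
        # the other user's transaction commits between our two reads,
        # updating both rows as a unit
        rows[4] = rows[9_000_000] = "new"
    seen_current[r] = rows[r]       # current-value read
    seen_snapshot[r] = snapshot[r]  # statement-level consistent read

print(seen_current)   # {4: 'old', 9000000: 'new'} -- a state that never existed
print(seen_snapshot)  # {4: 'old', 9000000: 'old'} -- consistent
```

This is exactly the distinction in the post: the current-value scan's answer depends on when the access path happened to visit each row, while the snapshot scan's answer depends only on the chosen point of consistency.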
Received on Wed Sep 06 2000 - 18:27:46 CDT

