Re: Article about supposed "murky" future for Oracle

From: Serge Rielau <>
Date: Thu, 01 Apr 2004 11:03:36 -0500
Message-ID: <c4heht$hh5$>

Thomas Kyte wrote:

> Serge Rielau <> wrote in message news:<c4fe11$lld$>...

>>Thomas Kyte wrote:
>>>So, tell me -- how do you do a report that gives -- oh say --
>>>financial data (and hence must sort of return an answer that actually
>>>existed in the database at some point in time) that consists of
>>>multiple queries -- each of which must be read consistent not only
>>>individually -- but with respect to each other.
>>Wouldn't you have some sort of a cut off time? I mean you would look at 
>>all transactions that happened for a given time-period using a timestamp.

> you mean you'd have to effective date/end date each record
Don't most transactions come with one anyway? If I buy a hammer at Home Depot, doesn't their DBMS record a transaction at time x, pointing to the list of things I purchased?
If I want to know how many hammers were sold last week, isn't it natural to select the transactions and have a predicate limiting the week?
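The timestamped-transaction idea above can be sketched as follows (a minimal illustration with invented table and column names, using SQLite as a stand-in DBMS -- not anyone's actual schema):

```python
import sqlite3

# Each sale records when it happened; a report simply predicates on time.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales (
    item    TEXT,
    qty     INTEGER,
    sold_at TEXT   -- ISO-8601 timestamp of the transaction
)""")
# An index on the timestamp keeps the time-range predicate cheap.
conn.execute("CREATE INDEX sales_sold_at ON sales(sold_at)")

conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("hammer", 1, "2004-03-22T10:15:00"),
    ("hammer", 2, "2004-03-25T16:40:00"),
    ("saw",    1, "2004-03-26T09:05:00"),
    ("hammer", 1, "2004-03-30T11:00:00"),  # outside last week's window
])

# "How many hammers were sold last week?" -- a predicate limiting the week.
(total,) = conn.execute(
    "SELECT COALESCE(SUM(qty), 0) FROM sales "
    "WHERE item = 'hammer' AND sold_at >= ? AND sold_at < ?",
    ("2004-03-22T00:00:00", "2004-03-29T00:00:00"),
).fetchone()
print(total)  # -> 3
```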
> and train all developers and end users how to properly query this data
> (that's 'hard')
> and make sure they do it consistently and correctly
> and write the code to maintain these fields
> and put up with the overhead of searching for the right records using
> sql each time

Overhead? I'd expect an index on the timestamp....

> sure -- you could do that. Or, you could just set your isolation read
> only or serializable (or using flashback, set your point in time to
> say "midnight" -- even if the report started at 12:15am) and run your
> queries.

I like flashback as noted earlier. I think it's a cool exploitation of versioning.
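The flashback idea -- setting your point in time to, say, "midnight" even if the report starts at 12:15am -- rests on versioning. A toy sketch of the principle (not Oracle's actual implementation; all names are invented):

```python
from bisect import bisect_right

class VersionedValue:
    """Every committed write keeps (commit_time, value); a flashback-style
    read returns the value that was current at the requested point in time."""
    def __init__(self):
        self._times = []   # commit timestamps, in commit order
        self._values = []

    def commit(self, t, value):
        self._times.append(t)
        self._values.append(value)

    def as_of(self, t):
        # Latest version committed at or before t; None if it didn't exist yet.
        i = bisect_right(self._times, t)
        return self._values[i - 1] if i else None

balance = VersionedValue()
balance.commit(100, 500)   # balance was 500 as of t=100
balance.commit(200, 750)   # updated to 750 at t=200

# A report started at t=215 can still ask for the state at t=150:
print(balance.as_of(150))  # -> 500
print(balance.as_of(215))  # -> 750
```

Every query in the report reads "as of" the same instant, so multiple queries are consistent with respect to each other without blocking incoming writes.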

>>If the DBMS kept running sums in the operational system then it buys 
>>itself a "writers blocking writers" problem in the normal operations and 
>>it wouldn't be normalized either.

> not saying they would keep running totals -- I'm saying "i need a
> report with accurate information"

Right, and I believe you can get that without killing concurrency. The running sum was just an example of how to kill concurrency through inappropriate design in a serializable/RR environment.
>>As a result the data that is being looked at is stable. RR/Serializable 
>>(if chosen) ensures no-one is trying to undo a previously committed 
>>transaction but it does not harm incoming new transactions.

> RR/Serializable blocks most incoming new transactions -- if that is
> not "harmful"....

No, they don't, at least not in DB2. They will prevent updates to rows that the RR transaction has looked at, or might be interested in. If, of course, the user is scanning the whole table with a cursor and doing joins procedurally, then yes. But in that case your undo tablespace usage is going to shoot up as well, your buffer pool is going to thrash, etc.
It always pays to be concise, not only for locking.

Note that I'm not saying at all that versioning is useless. It has its uses, and I see and accept most of your arguments; all I'm saying is that it is neither a silver bullet nor required for success. Just like, say, MDC is neither a silver bullet nor required. Versioning is a tool that simplifies a problem, given that you are willing to sacrifice at another point (such as having extra disk set aside for undo tablespaces and using a bigger buffer pool). I agree that flashback query is an elegant way of getting an accurate snapshot in time across your database.
I don't agree that versioning is the one and only for other usages. As you mentioned earlier, users are frighteningly ignorant of isolation levels.
A user can screw up using versioning just as well as with ANSI dirty read, cursor stability, or read stability.


Serge Rielau
DB2 SQL Compiler Development
IBM Toronto Lab
Received on Thu Apr 01 2004 - 10:03:36 CST