

Re: Article about supposed "murky" future for Oracle

From: Thomas Kyte <>
Date: 31 Mar 2004 04:49:38 -0800
Message-ID: <>

"rkusenet" <> wrote in message news:<c4d7qd$2h6qf9$>...

> "Thomas Kyte" <> wrote in message

> > "rkusenet" <> wrote in message news:<c4cifo$2h5tbc$>...
> > > "Thomas Kyte" <> wrote
> > >
> > > > Well, I didn't prove that really -- the point I made on page 35 was
> > > > that
> > >
> > > ha ha. this is getting better. So what Daniel Morgan has been doing
> > > is blindly believing oracle marketing hype. Not very smart of him.
> > >
> >
> > I would hardly call it marketing hype. I know how the other databases
> > work, I've worked with them as well
> The 'hype' I was referring to was the statement by Daniel Morgan that
> "it is IMPOSSIBLE to get a read consistent view for a large set of rows"
> in sqlserver. Note the word impossible.

That's certainly not the way it read to me anyhow -- it read to me that "dan read a book by an Oracle guy, dan believed everything in book, dan has been fed a bunch of incorrect Oracle marketing hype".

There is no marketing hype in my books, just technical information.

And I pointed out that other RDBMS's can get consistent result sets -- at the price of

o deadlocks between readers and writers
o readers blocking writers
o writers blocking readers
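That trade-off can be sketched concretely. The snippet below uses SQLite in its default rollback-journal mode purely as a stand-in for a lock-based engine (it is not Informix or SQL Server, whose locking details differ); it shows a reader's view shifting between statements when no locks are kept, and the writer blocking once the reader does keep a shared lock:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, isolation_level=None)              # autocommit
b = sqlite3.connect(path, isolation_level=None, timeout=0.1)

a.execute("CREATE TABLE t (x INTEGER)")
a.execute("INSERT INTO t VALUES (1)")

# With no read transaction open, two SELECTs in the same session can
# disagree: a writer committed in between, so the view is not consistent.
first = a.execute("SELECT COUNT(*) FROM t").fetchone()[0]
b.execute("INSERT INTO t VALUES (2)")
second = a.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(first, second)            # 1 2 -- the result set moved underneath us

# To freeze a consistent view, the reader must take and KEEP a shared
# lock -- and now the writer blocks (here it errors out after its timeout).
a.execute("BEGIN")
a.execute("SELECT COUNT(*) FROM t").fetchone()
try:
    b.execute("INSERT INTO t VALUES (3)")
    blocked = False
except sqlite3.OperationalError:   # "database is locked"
    blocked = True
a.execute("COMMIT")
print(blocked)                  # True -- readers blocking writers
```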

> I am sure that a few things can be done in Oracle that are impossible in
> other products. But I don't think this is one of them.

> > the problem is -- unless it holds the lock until it resolves the
> > entire query (or locks all rows up front) you are *not* getting a read
> > consistent result set. (and if it does hold the lock, i would hardly
> > call that "very short" if it takes 5 minutes for my query to execute)
> ..
> deleted.
> What you have mentioned is true; no dispute there. The momentary lock
> is obviously dependent on the number of rows. Based on my experience

The problem here is that you (after chopping out the example) are ascribing to the SCROLL cursor a feature it does not *have*.

A SCROLL cursor does *not* return a read consistent result set. A SCROLL cursor does *not* lock the data.

You would have to be using repeatable read or higher in Informix to get a read consistent result set.

All a scroll cursor does is copy the result set into a temporary table and give it a primary key so you can go to "row N" in the result set and scroll backwards and forwards through it.
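Mechanically, that "copy aside and key it" step can be sketched like this. SQLite is used as a neutral stand-in and the table/column names (`orders`, `scroll_buf`) are made up for illustration; this is not Informix's actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amt INTEGER);
    INSERT INTO orders VALUES (10, 100), (20, 200), (30, 300);

    -- roughly what a SCROLL cursor materializes under the covers:
    -- the result set copied off to the side, keyed so you can seek to "row N"
    CREATE TEMP TABLE scroll_buf (
        rn  INTEGER PRIMARY KEY AUTOINCREMENT,
        id  INTEGER,
        amt INTEGER
    );
    INSERT INTO scroll_buf (id, amt)
        SELECT id, amt FROM orders ORDER BY id;
""")

# "FETCH ABSOLUTE 2" is now just a keyed lookup on the private copy...
row = conn.execute("SELECT id, amt FROM scroll_buf WHERE rn = 2").fetchone()
print(row)   # (20, 200)

# ...but the copy only reflects whatever was committed as it was filled;
# it says nothing about consistency while the copy itself was being made.
```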

A SCROLL cursor does *not* return a read consistent result set.

WITH HOLD just says "please make the temporary table be 'on commit PRESERVE rows' instead of 'on commit DELETE rows'". It does not make it read consistent either.

Your posting said "Informix does read consistent result with this style of cursor"

It does *not*

That is my point here. I hope you do not have code that relies on this.

If you are not using repeatable read (gets shared read locks and KEEPS them until you commit -- which again would mean you are committing for the purpose of the DATABASE, not because your transaction was done or anything like that) you are *not* getting read consistent results.

scroll/no scroll
hold/no hold

They are not relevant -- they are programmatic devices; they have nothing to do with the manner in which the data is retrieved, consistency-wise.

> a properly tuned system can return read consistent rows for many
> thousands of rows within a few seconds. Anything more than that is
> definitely a problem and has to be solved at the application level.

Not the way you describe it in Informix (read-consistency-wise). Unless Informix locks the data as it reads it -- and keeps the locks until it has read the very last row and exhausted the result set -- it is not getting read consistent results.

SCROLL and WITH HOLD do not provide that.

Repeatable Read -- that would
Serializable -- that would


> > and how would a temp table (which is what a scroll cursor in informix
> > is doing under the covers -- copying the data off to the side) solve
> > anything?
> one obvious difference between an INTO TEMP and SCROLL CURSOR is
> the parallelism in writing. INTO TEMP is parallel written into different
> dbspaces, increasing the performance by an order of magnitude. That's why
> I prefer INTO TEMP for a read consistent query.

sorry -- but you are not getting a read consistent query there. You are getting whatever data happened to have been committed by the time your query got around to getting it.

Unless of course you use repeatable read or serializable.

This is the crux of the discussion.

> However, as Serge has pointed out effectively, versioning comes at a cost.
> I don't have to look any further than my company where we use some tables
> as work flow tables. Some tables are used to only hold results from
> different processes inserting rows asynchronously. Almost like using objects
> in memory for a work flow queue. It is our requirement
> that rows are displayed on the fly. Meaning, as the user is munching the first
> set of rows, more rows are being pumped into the table. Using an unlogged
> database, with dirty read (each session queries its data only), we get
> very efficient performance. We just don't care about the transaction state
> because the data is only transient. Effectively the data is written and
> read directly in memory (BUFFERS).
> I would like to know how this can be done as efficiently in Oracle. This
> is not a troll. Based on the feedback I received from various sources, it
> seems not.

In Oracle we call this AQ (Advanced Queuing). IBM would call it MQSeries, MS would call it MSMQ. We can do in-memory queueing (no log) or stable queues based on disk storage -- it would be up to you.
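The shape of that in-memory, no-log workflow queue -- producers inserting rows asynchronously while a consumer displays them on the fly -- can be sketched generically. This is plain Python stand-in code, not the AQ (or MQSeries/MSMQ) API:

```python
import queue
import threading

q = queue.Queue()                 # in-memory queue: fast, not persisted

def producer():
    # a process inserting "rows" asynchronously into the workflow queue
    for i in range(5):
        q.put({"row": i})
    q.put(None)                   # sentinel: producer is done

t = threading.Thread(target=producer)
t.start()

seen = []
while True:
    item = q.get()                # consumer munches rows as they arrive
    if item is None:
        break
    seen.append(item["row"])
t.join()
print(seen)   # [0, 1, 2, 3, 4]
```

Because nothing is logged to disk, the data is transient by design -- exactly the "we just don't care about the transaction state" behaviour described above.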

It is quite efficient.
