
Re: Article about supposed "murky" future for Oracle

From: Daniel Morgan <>
Date: Fri, 02 Apr 2004 22:26:05 -0800
Message-ID: <1080973548.988810@yasure>

rkusenet wrote:

> "Thomas Kyte" <> wrote
>>>One thing I have noticed is that all examples given by Oracle folks
>>>here to prove their point involve taking a rare case like a humongous query.
>>not so. my example used 3 rows - 3 rows.
>>Your choices in read lock databases:
>>a) use read committed and get the wrong answer
>>b) use repeatable read/serializable and either
>> 1) deadlock with an update transaction, one of you loses
>> 2) block the update for a period of time or get blocked and get the
>> answer
>>three rows.
> The point I am referring to is not what you have described above. What you
> have mentioned above only proves that a query with an incorrect isolation
> level will give an incorrect result, something which is too obvious to even
> mention, let alone harp on.
> What I meant was that, in Informix, unless the query involves a huge
> set of rows, the limitations of locking (R blocking W and W blocking R)
> do not become an impediment in day-to-day operation.

In tests conducted at Boeing we were able to prove, and rather conclusively I might add, that databases that could have a problem with a large number of rows, or transactions, or users, could also have them with three or more.

If your statement is just a matter of playing the averages ... it should be acknowledged as such. I do not believe you can demonstrate on any RDBMS a problem that only happens with 'a huge set of rows'.
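The anomaly Tom Kyte is pointing at needs only three rows to appear. The sketch below is a hypothetical simulation (not vendor code, and all names are invented) of a read-committed-style scan in a lock-based database: each row's read lock is released as soon as the scanner moves on, so a committed transfer can slip in mid-scan and the reported total is one the table never held.

```python
# Hypothetical simulation of a READ COMMITTED scan in a lock-based
# database. Three account rows; the true total is 300 at every instant,
# but the scanner can still report 250.

def scan_with_transfer():
    rows = [100, 100, 100]          # total is 300 at all times
    seen = []

    # Scanner reads row 0, then releases its row lock and moves on.
    seen.append(rows[0])            # sees 100

    # A concurrent transaction commits a transfer of 50 from row 2
    # to row 0. Nothing stops it: row 0's lock was already released,
    # and row 2's lock has not been taken yet.
    rows[0] += 50                   # row 0 is now 150
    rows[2] -= 50                   # row 2 is now 50

    # Scanner reads the remaining rows after the commit.
    seen.append(rows[1])            # sees 100
    seen.append(rows[2])            # sees 50

    return sum(seen), sum(rows)

reported, actual = scan_with_transfer()
# reported is 250 -- a total that never existed; actual is still 300.
```

The point of the sketch is that no "huge set of rows" is required; only an unlucky interleaving between the scan and one small committed update.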

> I believe that in any properly designed application using Informix, a
> good designer will incorporate features to minimize such problems. Everyone
> does it. Not too long ago, Oracle developers and DBAs used to solve the
> "snapshot too old" problem through application design only. Just search on
> that problem in this newsgroup.

Not those that read Tom Kyte's advice. ;-)

The point I think being made here is that there are some problems with Informix, and SQL Server, and Sybase, that cannot be overcome by 'good design' unless one considers 'good design' to equate with some combination of locking records, pages, and/or tables.

> Admittedly Oracle is idiot-proof when it comes to isolation levels, fine -
> more power to Oracle. To me it doesn't matter. Why? Because everything
> comes with a price. To give a simple example: Java is much more idiot-proof
> than C/C++, as in array bounds checking. We all know where they stand in performance :-)

But we also know that performance without integrity is worthless. Who wants the database that takes their data on a faster ride into the toilet?

My list of priorities is as follows:

1. Security
2. Data Integrity
3. Stability
4. Scalability
5. Performance

I don't care how fast it is if the data is stolen, corrupted, the machine crashes, and I can't get my users on the system.

But even if you wish to cry performance, performance, performance ... where are the benchmarks that prove Informix isn't a sloth on a CD?

>>how many of us run reports on a system that aggregates data whilst
>>others are modifying it?
> Many, but Informix does not have a problem with this. The problem
> comes only when the set of rows is huge, and that's where proper
> design comes into question to minimize or even eliminate it.

What is the definition of 'set of rows is huge'? Is that documented somewhere? It should be, so that people using Informix will know when they need to move to Oracle or some other product that can handle it.

I think we both know that the problem is not the number of rows but rather the fact that the longer something takes the more likely a problem will occur. This isn't rocket science ... it is just the exposure of a weakness for a prolonged period of time. The law of averages just catches up.

>>Worse yet, how many end users understand that what they are seeing
>>could be something that never existed.
> Isn't this true for all products? I mean, how many users actually
> verify that a complex report involving a multi-table
> join, correlated subqueries and partitioning is actually correct?


> What if Oracle's engine has a rare bug under these conditions? We all
> take for granted that an enterprise product like Oracle/Informix/SS
> will not have such bugs, right? If Oracle were so perfect, then
> perhaps the need for Metalink (or whatever bug tracking site) would
> not arise at all.

Bug does not equate with data integrity in the context in which it is being discussed here. If you wish to travel this route, I will also acknowledge that a cosmic ray might strike my hard disk, just so that doesn't become an issue too.

> I repeat don't blame the product for bad coding.

The issue being discussed as I've followed it is not one of bad coding but rather a product that proposes only a single workaround for a weakness ... lock and serialize.

>>only if you select for update -- which if you want to not have "lost
>>updates", you might consider doing in some but not all cases. Like
>>you say -- it depends.
> true. so is the need for read consistent queries. As a matter of fact,
> from my experience, few queries require that level of consistency. The
> reason why I like Informix's approach is that isolation level is
> provided for every type of scenario. And so does SS and DB2. Use it
> appropriately.

But at a cost. Which is, as I recall, that reads block writes and writes block reads.
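That cost can be made concrete with a small, admittedly contrived threading sketch (every name in it is invented for illustration, and a plain `threading.Lock` stands in for the database's lock manager): a reader holding its lock for the length of a scan stalls a writer until the scan completes.

```python
# Hypothetical illustration of "readers block writers": the reader
# holds the lock for its whole scan, so the writer must wait.
import threading
import time

def run_demo():
    lock = threading.Lock()             # stand-in for a row/page lock
    log = []                            # ordered record of events
    reader_holding = threading.Event()  # tells the writer when to start

    def reader():
        with lock:                      # read lock held for the whole scan
            log.append("reader acquired")
            reader_holding.set()
            time.sleep(0.05)            # pretend the scan takes a while
            log.append("reader released")

    def writer():
        reader_holding.wait()           # start only once the reader has the lock
        log.append("writer waiting")
        with lock:                      # blocks here until the reader finishes
            log.append("writer acquired")

    r = threading.Thread(target=reader)
    w = threading.Thread(target=writer)
    r.start(); w.start()
    r.join(); w.join()
    return log

events = run_demo()
# The writer cannot acquire the lock until the reader releases it.
```

In a multi-version system the writer would proceed immediately and the reader would see a consistent snapshot; in a lock-based system the two serialize, which is exactly the trade being debated in this thread.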

Daniel Morgan
(replace 'x' with a 'u' to reply)
Received on Sat Apr 03 2004 - 00:26:05 CST
