
Re: Article about supposed "murky" future for Oracle

From: rkusenet <>
Date: Thu, 1 Apr 2004 10:09:02 -0500
Message-ID: <c4hb6k$2gi5gl$>

"Daniel Morgan" <> wrote in message news:1080798723.472164_at_yasure...
> > Speaking for myself, I like using database tables for
> > work flow. It is a proven technology and IMO no database other than
> > Informix works as well. The reason is that Informix's concept of a non-
> > logged database is unique. It virtually eliminates any disk activity
> > except writing to physical disk once per checkpoint. Of course you will
> > lose data if a crash happens, but then you don't care either.
> Perhaps you don't care if you lose data. I do. My bank does. My
> government does. Heck even my mother does. ;-)

Please do not argue for the sake of argument. The non-logged database is only for work-flow tables, which you say Oracle's AQ series can handle. Thomas Kyte mentions that it can be done in memory too, without any disk write. This is what we are doing (or something very similar).
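To make the pattern under discussion concrete, here is a minimal sketch of a work-flow table that consumers poll for pending jobs. The table and column names are mine, and an in-memory SQLite database stands in for a non-logged Informix dbspace; this illustrates the shape of the approach, not either vendor's implementation:

```python
import sqlite3

# In-memory database as a stand-in for a non-logged work-flow table
# (table and column names are hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE work_queue (
    id      INTEGER PRIMARY KEY,
    payload TEXT,
    status  TEXT DEFAULT 'pending')""")
conn.execute("INSERT INTO work_queue (payload) VALUES ('job-1'), ('job-2')")

def poll_one():
    """Claim the oldest pending row and return its payload, or None."""
    cur = conn.execute(
        "SELECT id, payload FROM work_queue "
        "WHERE status = 'pending' ORDER BY id LIMIT 1")
    row = cur.fetchone()
    if row is None:
        return None
    # Mark the row done so other pollers skip it.
    conn.execute("UPDATE work_queue SET status = 'done' WHERE id = ?",
                 (row[0],))
    return row[1]

print(poll_one())  # job-1
print(poll_one())  # job-2
print(poll_one())  # None
```

With several such pollers running, each pass claims at most one job, which is the set-up the rest of the thread argues about.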

Of course, where we require full data integrity, we do use logging. Isn't that a basic requirement?

> > The only advantage I see of using a queue series is that the application
> > processes need not poll the table; they can be called on events. But then
> > with multiple processes polling the work-flow table, the wait time is
> > almost negligible. Actually, we can convert it into an event by attaching
> > a table trigger to C code.
> If you write C that is an entirely different matter as that is
> not functionality native to the product.

Native to what? How is a function written in C and registered as a database function any less native than, say, using the AQ series?
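For what it's worth, the "trigger calls a registered native function" pattern can be sketched in miniature outside Informix too. Below, Python's sqlite3 module plays the role of the engine: a host-language function is registered with the database (analogous to registering a C function with the server) and an AFTER INSERT trigger fires it, turning each insert into an event with no polling. All names are mine, not anything from Informix or Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
events = []

# Register a host-language function with the database engine,
# analogous to registering a C function with the server.
conn.create_function("notify", 1,
                     lambda payload: events.append(payload) or 0)

conn.execute("CREATE TABLE work_queue (id INTEGER PRIMARY KEY, payload TEXT)")

# The trigger turns each insert into an event: no polling needed.
conn.execute("""
    CREATE TRIGGER on_new_work AFTER INSERT ON work_queue
    BEGIN
        SELECT notify(NEW.payload);
    END""")

conn.execute("INSERT INTO work_queue (payload) VALUES ('job-1')")
print(events)  # ['job-1']
```

The registered function runs inside the engine's trigger machinery, which is exactly the sense in which such a hook is "native" to the product.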

> Try looking at it another way. A lock in a block read into memory
> is not a disk write. Locking 1000 records is not writing 1000 times
> to disk. And in Informix you can run out of record level locks.
> Then what happens? You get lock escalation and start locking pages.


I thought you worked with Informix. Informix does not do lock escalation. The number of locks is controlled by the LOCKS parameter. A lock requires 44 bytes of memory, so 100,000 locks translate to about 4.4 MB of RAM (a million locks, to about 44 MB). If a rogue query is issued which consumes more locks than this, Informix will dynamically allocate more. Depending on the version, the maximum number of locks it can allocate is large; I think in my version it can go up to a million. Beyond that it runs out of locks and the query is rolled back. But if I had to issue such a query, I would lock the table itself. Quicker.
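The back-of-the-envelope memory cost is easy to check from the 44-bytes-per-lock figure above (the per-version lock ceiling is my recollection, not a documented limit):

```python
LOCK_BYTES = 44  # per-lock memory cost cited above

def lock_memory_mb(n_locks):
    """RAM consumed by n row-level lock structures, in (decimal) MB."""
    return n_locks * LOCK_BYTES / 1_000_000

print(lock_memory_mb(100_000))    # 4.4
print(lock_memory_mb(1_000_000))  # 44.0
```

So even a million concurrent row locks cost on the order of tens of megabytes, which is the point being made against page-level escalation.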

One thing I have noticed is that the examples given by Oracle folks here to prove their point all involve a rare case, such as a humongous query. How many of us would issue an update involving thousands of rows against a heavily used table during peak production hours? In Oracle's case, since it has to write to the block twice (once for the lock and once for the data update), I am not sure how large the performance impact would be. IIRC you once mentioned here that it would be extremely stupid to do a massive SELECT FOR UPDATE.

BTW this document points out some flaws in Oracle's lock implementation.

Of course, it can very well be marketing hype from IBM. I have as many reasons to believe their spin as Oracle's. Either both are self-centered or both aren't.

Received on Thu Apr 01 2004 - 09:09:02 CST
