Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: changing the isolation level

From: Mark D Powell <Mark.Powell_at_eds.com>
Date: 15 Jan 2007 07:38:38 -0800
Message-ID: <1168875518.336302.285140@m58g2000cwm.googlegroups.com>

Marten Lehmann wrote:
> > The data changed by a session in Oracle can never be seen by another
> > session until it is committed. Why doesn't the session making the
> > change perform the verification test to begin with?
>
> Because they are two different applications. Currently, both are
> written in Perl, but one of them is about to change to Java, so
> session sharing is not possible.
>
> Our current process is simple: the first process that changes the data
> is left open; the transaction is neither committed nor rolled back. Then
> I can run tests on the data (and I know by design that no one else is
> using the "dirty" data at the same time). If I'm happy with it, I commit
> the changes; if not, I roll them back.
>
> Think of a script that generates invoices monthly. Let's say you have an
> average volume of $1000. If a run of the script in the next month
> returns a volume of only $200, you can be pretty sure that something
> has gone wrong. But if you cannot see this until you have confirmed
> everything, it is too late to revoke it.
>
> Sybase supports all isolation levels, so when I read that Oracle
> doesn't allow a certain isolation level, it just sounds like a lame
> excuse for a missing feature. It should be up to the user which
> isolation level to use, and if I want to read "dirty" data, then why not?
>
> Regards
> Marten

Most of our applications are designed with the understanding that other sessions may be updating the target data at the same time (OLTP). As such, there is never a reason to read uncommitted data.

If we had to perform a process where the changes cannot be applied until a complex verification has been performed, and that verification could not be done as part of the data extraction and update process itself, then I suspect we would extract the target data into a work table. The verification would run against the work table, and only when it succeeded would the changes be applied to the final target data.
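A minimal sketch of that work-table pattern, tied to the monthly-invoice example above. This uses Python with SQLite standing in for Oracle, and the table names (`invoices`, `invoice_work`), columns, and the $1000 expected volume are all illustrative assumptions, not anything from the original post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (customer TEXT, amount REAL, billed INTEGER DEFAULT 0);
    CREATE TABLE invoice_work (customer TEXT, amount REAL);
    INSERT INTO invoices (customer, amount) VALUES ('acme', 600), ('bravo', 500);
""")

# 1. Extract: stage this month's unbilled rows into the work table.
conn.execute("""INSERT INTO invoice_work
                SELECT customer, amount FROM invoices WHERE billed = 0""")

# 2. Verify: sanity-check the staged data only; the real table is untouched,
#    so no long-open uncommitted transaction is needed.
(total,) = conn.execute("SELECT SUM(amount) FROM invoice_work").fetchone()
expected = 1000.0  # hypothetical average monthly volume
if total < 0.5 * expected:
    # Verification failed: discard the staged rows; nothing to undo.
    conn.execute("DELETE FROM invoice_work")
    conn.commit()
else:
    # 3. Apply: push the verified changes to the final target data and commit.
    conn.execute("""UPDATE invoices SET billed = 1
                    WHERE customer IN (SELECT customer FROM invoice_work)""")
    conn.commit()
```

Because the verification reads only committed data in the work table, other sessions never see half-finished results, which is the point of the suggestion above.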

Still, knowing the developers, I imagine they would first have searched in depth for a way to extract, update, verify, and apply the data in a single step, probably on some kind of piecemeal basis: per customer, per unit, per sales order, per work order, and so on.

But you cannot expect an application written for one of the major databases to be ported as-is to another major database product and work the same way, because the read-consistency and locking architectures are different. I have seen cases where the same SQL statements executed by two sessions against the same set of data produce different results on different vendors' products. You have to re-evaluate how every single process will behave under the new architecture. Porting most applications between databases is a major undertaking that requires detailed knowledge of how both the existing and the target RDBMS work.

Good luck, you likely have a lot of work ahead. -- Mark D Powell -- Received on Mon Jan 15 2007 - 09:38:38 CST

