Newsgroup: c.d.o.server

Re: Re Returning result of a simple "select * " from a stored procedure without using cursor

From: Joseph Weinstein <bigdog_at_bea.com>
Date: Tue, 21 Oct 2003 10:31:15 -0700
Message-ID: <3F956D62.E9479254@bea.com>

VC wrote:

> Hello Joe,
>
> [VC]
> > > > Please elaborate why you are calling a rather simple tool (select for
> > > > update) a 'work-around'. The statement is used for what is known as
> > > > pessimistic locking. One can resort to a technique called optimistic
> > > > locking, in WEB environment, in order to avoid using 'select for
> > > > update' if need be.
> > >
> [JW]
> > > I call it a workaround because it requires non-standard SQL, which our
> > > product handles in code generation for EJBs etc. If I were boss, I'd ask
> > > that Oracle implement SERIALIZABLE isolation by locking read data.
>
> 1. Please elaborate on why it's necessary to "lock read data" in the
> SERIALIZABLE mode.

Many customers want to write their transactions with the surety that no data involved in the transaction will change while it is underway. They are regularly shocked to hear that repeated reads are satisfied from a possibly stale copy, and that some other transaction may have changed and committed some of the data the current transaction has read. They *don't* want to complete the full transaction, only to be told at commit time, "things have changed since you were last reading real data, so you're being rolled back; try again."

> 2. Obviously, the standard is not adequate to solve various practical
> transaction issues. Otherwise, why would a locking scheduler have a
> plethora of non-standard, so-called locking hints (HOLDLOCK, NOLOCK,
> PAGLOCK, READPAST, ROWLOCK, ... etc.) ? One cannot seriously hope to
> design a real OLTP system without using some of those. Would not you
> agree that Oracle's 'non-standard' select for update is a more elegant
> way to modify transactional behaviour in comparison to the above ?

Actually, I agree, but in my personal opinion a real SERIALIZABLE, fully pessimistic locking mode would be best. In fact, what I'd like is to be able to set and reset the isolation characteristics for each statement in a transaction. I'd like to start at any given isolation level, and be able to say: select this data with no lock; select this other data with a read lock; update this row with an exclusive transactional lock; update this other row without any transaction-level lock; and so on.

> 3. As I've already mentioned, in a stateless environment (Web
> applications), a correct approach would be using optimistic locking
> thereby making the 'select for update non-standardness' a moot issue.

Certainly any feature or weakness is moot in a given circumstance, and I do like the idea of readers not locking. It is faster in most cases. It is only when people are talking about serializable transactions that it seems 'not cricket'.
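For what it's worth, the optimistic-locking idea is easy to sketch with a version column. Here is a minimal illustration using Python's built-in sqlite3 as a stand-in for Oracle; the `accounts` table and its schema are made up for the example:

```python
import sqlite3

# Hypothetical schema: a row carries a 'version' counter that every write bumps.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0, 0)")
conn.commit()

def optimistic_update(conn, acct_id, delta):
    """Read the row and its version, then update only if the version is unchanged."""
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (acct_id,)
    ).fetchone()
    # ... in a stateless web app, the request boundary would fall here ...
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance + delta, acct_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # False means someone else committed first: retry

print(optimistic_update(conn, 1, 50.0))   # True: no concurrent writer intervened
```

No reader ever takes a lock; a `False` return means "reload and retry" rather than a lock wait, which is why the approach suits stateless web tiers.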

> [VC]
> > > > Oracle currently does not implement the SERIALIZABLE isolation level
> > > > in the strict sense of the word. What is known as the Oracle
> > > > serializable isolation level, is commonly considered to be the
> > > > SNAPSHOT IL.
> > >
>
> [JW]
> > > I agree, but many oracle customers seem to be surprised when I describe
> > > what actually happens, and the odd bit of dissembling that oracle employs
> > > in accomplishing a repeatable read.
>
> Please clarify what you mean by the above. Oracle's SERIALIZABLE does
> ensure repeatable reads without resorting to 'select for update'.

Oracle's SERIALIZABLE mode fakes repeatable reads by copying the data when it is first read, and then answering from the copy if a read is repeated during the transaction. Most people I've described this to are shocked. They expect the repeat read to be the *real* data; i.e., they expect it to have been locked.
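A toy model of that copy-on-first-read behaviour (this illustrates the snapshot idea only, not Oracle's actual internals):

```python
# Toy model: a snapshot transaction answers repeated reads from the
# private copy it took at first read, not from the live database.
class SnapshotTxn:
    def __init__(self, db):
        self.db = db          # shared dict: key -> committed value
        self.snapshot = {}    # private copies taken on first read

    def read(self, key):
        if key not in self.snapshot:
            self.snapshot[key] = self.db[key]   # copy on first read
        return self.snapshot[key]               # repeats come from the copy

db = {"row1": 100}
txn = SnapshotTxn(db)
first = txn.read("row1")     # 100, copied into the snapshot
db["row1"] = 999             # another transaction commits a change
repeat = txn.read("row1")    # still 100: the repeat is not the *real* data
```

The repeated read is consistent, but it can be stale the moment another transaction commits; a locking implementation would instead have blocked that concurrent write.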

> [VC]
> > > > This was true under some older Oracle versions. However, the bug was
> > > > fixed in May 2002 (patchset 8.1.7.4) and it's hardly fair to discuss
> > > > the database behaviour in the context of a fixed software bug. If
> > > > you're saying that the bug is still there, please post (or send me
> > > > via e-mail) a reproducible test case. I'd very much interested to
> > > > learn from your experience.
> > >
> > > Ok! It was in all oracle versions we'd dealt with from '96 till it was fixed,
> > > and it was known for years. I believe the recent "fix" is simply to throw an
> > > exception, failing the transaction in that case. The failure message is not so
> > > explicit as to say "we can't complete this tx because your insert caused an
> > > index block to split, and we won't be able to find the row(s) you inserted".
> > > We get the generic 8177 error. I will attach a SQL-PLUS script and jdbc
> > > program that you can use to duplicate the problem, and please let me know
> > > what it does for you. Doing it in Java using oracle's thin driver did give
> > > an 8177 on the 110th transaction. This is the "cannot serialize this transaction"
> > > message, which I have to explain to customers when they ask why the DBMS
> > > cannot serialize a tiny transaction that is just like the other hundreds they've done
> > > successfully, especially when there's only one user logged in to the DBMS.
> > > I'm not sure what fairness we're talking about. My discussing it demonstrated
> > > a history of experience with Oracle, long, and in this case painful, but truth,
> > > though anecdotal, can't be unfair.
>
> I ran both the original version verbatim and the more compact
> electrons-saving one, and could not reproduce under Oracle 9.2:
>

That's excellent news, but just to be sure, could you end the script by verifying that all rows were updated to have a 'val' of 200? The original bug, which can be demonstrated in 7.3.4, is that your script would complete in this exact way, but would have silently failed to update a few of the rows, and they would be in the DBMS, committed with a val=100! In fact, depending on the block size, it might take more such transactions to hit the problem. If you could bump your loop to 500 rows, it might be good. I appreciate your help in this.

   As you have shown in the other post, Oracle 8 will fail this script. This is because the "fix" to the original problem, at least in that version of Oracle, is to fail an innocent transaction when it unknowably was in the circumstances that caused the original silent partial transaction.

> =====================
> SQL*Plus: Release 9.2.0.4.0 - Production on Tue Oct 21 06:41:35 2003
>
> Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
>
> Connected to:
> Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
> With the Partitioning, OLAP and Oracle Data Mining options
> JServer Release 9.2.0.4.0 - Production
>
> SQL> create table isolationtest (id char(250) primary key, val float);
>
> Table created.
>
> SQL> begin
> 2 for i in 1..200 loop
> 3 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> 4 insert into isolationtest (id,val) values (i,100);
> 5 update isolationtest set val = 200 where id = i;
> 6 COMMIT;
> 7 end loop;
> 8 end;
> 9
> 10 /
>
> PL/SQL procedure successfully completed.
>
> SQL> select count(*) from isolationtest;
>
> COUNT(*)
> ----------
> 200
>
> SQL> quit
> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.4.0 -
> Production
> With the Partitioning, OLAP and Oracle Data Mining options
> JServer Release 9.2.0.4.0 - Production
> =================================
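For reference, the quoted SQL*Plus loop can be sketched in Python with sqlite3 (a stand-in only; sqlite is not Oracle and will not reproduce the bug), together with the verification step I asked for above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE isolationtest (id TEXT PRIMARY KEY, val REAL)")

# Same insert-then-update loop as the quoted script, bumped to 500 rows.
for i in range(1, 501):
    conn.execute("INSERT INTO isolationtest (id, val) VALUES (?, 100)", (str(i),))
    conn.execute("UPDATE isolationtest SET val = 200 WHERE id = ?", (str(i),))
    conn.commit()

# The verification the original script omits: every row must show val = 200.
# Under the old bug, a count(*) of 200 could coexist with rows stuck at val=100.
stale = conn.execute("SELECT count(*) FROM isolationtest WHERE val <> 200").fetchone()[0]
total = conn.execute("SELECT count(*) FROM isolationtest").fetchone()[0]
print(total, stale)   # expect 500, 0
```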
>
> [VC]
> > >
> > > > Whilst I am unhappy myself about the Oracle level of 'bugginess' and
> > > > the amount of time it takes to fix bugs, I was bitten more often by
> > > > the locking scheduler behaviour in the READ COMMITTED mode. It comes
> > > > to one as quite a surprise when one discovers that, under the locking
> > > > scheduler, aggregate statements can return inconsistent data in the
> > > > RC mode, especially if one is accustomed to always seeing statement
> > > > level consistent data
> > >
>
> [JW]
> > > I understand. There are many innocent expectations dashed when tested
> > > against something other than one's 'mother DBMS'. After enough pain
> > > and experience going one way and the other, one tempers one's initially
> > > religious fervor in the defense of one's first DBMS, and develops a
> > > careful palette of mixed trust and dismay for such products.
>
> Let us skip the 'surprise' part and allow me to re-phrase my original
> statement. In the READ COMMITTED mode, the locking scheduler
> aggregate functions (sum(), avg(), etc.) may return incorrect data,
> e.g. 'select sum(value) from t1' may produce an incorrect result. As
> far as I know, there is no satisfactory solution to this problem. An
> obvious one causes severe concurrency degradation and I'd be very much
> interested to know whether alternative approaches exist.

Running that query in serializable mode should lock all the data, but unless the whole table is locked, any inserts that occurred in the interim could change the truth. MS has other, simpler problems: to my last information, it collects the running sum in the same numeric type as the column, so if the sum of the data exceeds the capacity of the column type you can get an overflow, even if you are asking for the *average*, which they compute by dividing the sum by the count. They should instead obtain the count first, and then accumulate the sum of each value/count (in a floating-point format).
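To illustrate the overflow point (a sketch of the failure mode, not SQL Server's actual code; the manual 16-bit wrap below just stands in for summing in the column's own narrow type):

```python
values = [30000] * 3   # each value fits comfortably in a 16-bit column

def avg_narrow(vals):
    """Accumulate the sum in the column's own 16-bit signed type, then divide."""
    s = 0
    for v in vals:
        s = (s + v + 32768) % 65536 - 32768   # wrap like a signed 16-bit integer
    return s / len(vals)                      # overflowed sum -> wrong average

def avg_wide(vals):
    """Obtain the count first, then sum value/count in floating point."""
    n = len(vals)
    return sum(v / n for v in vals)           # no intermediate exceeds the inputs

print(avg_narrow(values))   # wrong: the running sum wrapped at 32767
print(avg_wide(values))     # 30000.0
```

Dividing each value by the count before summing keeps every intermediate result no larger than the inputs, so the average cannot overflow even when the naive sum would.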

Joe

Received on Tue Oct 21 2003 - 12:31:15 CDT
