Oracle FAQ: Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: IOT, memory and transaction time

From: <deja_at_2bytes.co.uk>
Date: 18 Apr 2007 02:59:51 -0700
Message-ID: <1176890391.832467.271500@y5g2000hsa.googlegroups.com>


On Apr 18, 10:36 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> .
> post
> top
> not
> do
> Please
>
> On 18.04.2007 10:51, d..._at_2bytes.co.uk wrote:
>
> > ok, I've discovered that it is the Update of the Dummy table that is
> > the problem. It doesn't cause a problem in SQL Server but seems to be
> > a massive problem in Oracle. Changing the Update statement to "Update
> > DUMMY Set Col1 = 1" without the where clause eliminates the problem
> > but obviously does not fulfill the requirement. Why does the where
> > clause "...where @lastRead = (select max(versionNo) from Table1 where
> > id = @id and versionNo <= @currentTransNum)" cause such a major
> > problem with a seemingly endlessly increasing transaction time?
>
> > What is a better way of structuring this SQL for Oracle?
>
> > The client code already assumes a rowcount based on an update
> > statement so I cannot change to a read statement, it still needs to be
> > an update (so I am told, though I have not seen the client app code
> > myself)
>
> Frankly, I am more and more confused. You cannot change the application
> and you do not have access to the code but you can change SQL? Also,
> your changed logic of using those tables does not interfere with the
> supposedly fixed app logic?
>
> Maybe it's easier to start over with a list of requirements and fixed
> interfaces and a description of what you want to do. At least I do not
> see your problem clearly enough that I would dare to come up with suggestions.
>
> Regards
>
> robert

Yep, I can change the SQL, but the app is written in LISP and is too unwieldy, and there is not enough LISP expertise available for long enough, to change it substantially. I have had fairly minor changes made to the LISP code to deal with this new logic.

It seems to be a fairly simple problem now. The old system updated the old versions in Table1 with "WHERE id = @id AND deleted = -1 AND versionNo = @lastRead", and that worked fine.
The new system updates the dummy table with "WHERE @lastRead = (SELECT MAX(versionNo) FROM Table1 WHERE id = @id AND versionNo <= @currentTransNum)", and it gets progressively slower with every transaction as the table fills.
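For reference, a sketch of what that new statement would look like in Oracle's bind-variable notation (the @-style variables above are SQL Server syntax; table and column names are taken from the description, so treat this as illustrative only). If nothing indexes the subquery's predicates, Oracle has to visit more of Table1 on every execution, which would match the slowdown described:

```sql
-- Sketch only: names as described above, Oracle-style binds (:var).
UPDATE dummy
   SET col1 = 1
 WHERE :lastRead = (SELECT MAX(versionNo)
                      FROM Table1
                     WHERE id = :id
                       AND versionNo <= :currentTransNum);
```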

This is where I expected it to use the secondary index. I have not yet been able to trace what it is actually doing, as we do not have Oracle expertise available at the moment, but SQL Server certainly works fine with this new SQL. The obvious difference is that SQL Server uses dynamic memory allocation, whereas Oracle in our tests had only about 180 MB allocated. We are going to run another test with more memory, but is the SQL optimized? Should it be rewritten? As I say, in our test we were adding 20,000 records to two separate tables in each transaction, so the tables were growing quite fast.
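One thing worth checking (a sketch under assumptions, not a tested fix; the index name is made up and the columns are taken from the description above): a composite index on (id, versionNo) would let the optimizer answer the MAX(versionNo) subquery from the index alone, via an index range scan with the MIN/MAX optimization, so its cost stays roughly constant instead of growing as the table fills:

```sql
-- Assumed names. With a composite index on (id, versionNo) the
-- "SELECT MAX(versionNo) ... WHERE id = :id AND versionNo <= :n"
-- subquery can be satisfied from the index alone rather than by
-- scanning the ever-growing table.
CREATE INDEX table1_id_ver_ix ON Table1 (id, versionNo);
```

Whether the optimizer actually picks that plan would need to be confirmed with an execution plan (EXPLAIN PLAN) once Oracle expertise is available.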

Also, can you confirm how to switch automatic statistics gathering on in Oracle (the equivalent of SQL Server's Auto Update Statistics)?

thanks for all your help

Received on Wed Apr 18 2007 - 04:59:51 CDT

