Re: IOT, memory and transaction time
On Apr 18, 11:19 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> On 18.04.2007 11:59, d..._at_2bytes.co.uk wrote:
>
> > On Apr 18, 10:36 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> >> Please do not top post.
>
> >> On 18.04.2007 10:51, d..._at_2bytes.co.uk wrote:
>
> >>> OK, I've discovered that it is the Update of the Dummy table that is
> >>> the problem. It doesn't cause a problem in SQL Server but seems to be
> >>> a massive problem in Oracle. Changing the Update statement to "Update
> >>> DUMMY Set Col1 = 1" without the where clause eliminates the problem
> >>> but obviously does not fulfill the requirement. Why does the where
> >>> clause "...where @lastRead = (select max(versionNo) from Table1 where
> >>> id = @id and versionNo <= @currentTransNum)" cause such a major
> >>> problem with a seemingly endlessly increasing transaction time?
> >>> What is a better way of structuring this SQL for Oracle?
> >>> The client code already assumes a rowcount based on an update
> >>> statement, so I cannot change to a read statement; it still needs to
> >>> be an update (so I am told, though I have not seen the client app
> >>> code myself).
> >> Frankly, I am more and more confused. You cannot change the application
> >> and you do not have access to the code, but you can change the SQL?
> >> Also, does your changed logic for using those tables not interfere
> >> with the supposedly fixed app logic?
>
> >> Maybe it's easier to start over with a list of requirements and fixed
> >> interfaces and a description of what you want to do. At least, I do
> >> not see your problem clearly enough to dare come up with suggestions.
>
> >> Regards
>
> >> robert
>
> > Yep, I can change the SQL, but the app is written in LISP and is too
> > unwieldy, with not enough expertise available for long enough to change
> > it. I have had fairly minor changes made to the LISP code to deal with
> > this new logic.
>
> > It seems to be a fairly simple problem now - the old system did an
> > update of the old versions in Table1 WHERE id = @id and deleted = -1
> > and versionNo = @lastRead, and that worked fine.
> > The new system does an update of the dummy table where @lastRead =
> > (select max(versionNo) from Table1 where id = @id and versionNo <=
> > @currentTransNum) and gets progressively slower with every
> > transaction as the table fills.
>
> > This is where I expected it to use the secondary index. I have not
> > yet been able to trace what it is actually doing, as we do not have
> > Oracle expertise available at the moment, but SQL Server certainly
> > works fine with this new SQL. The obvious difference is that SQL
> > Server uses dynamic memory allocation, whereas Oracle, in our tests,
> > only had about 180MB allocated. We are going to do another test with
> > more memory, but is the SQL optimized? Should it be re-written? As I
> > say, in our test we were adding 20,000 records to 2 separate tables
> > in each transaction, so the tables were growing quite fast.
>
> I don't know where you get that from, but you might have to manually
> set / tune storage parameters for Oracle, depending on the version.
>
> > Can you confirm to me how to set Auto Update Statistics on?
>
> There's a ton of things to explain about Oracle, but you do not even
> mention your version. There is nothing like auto-update statistics in
> Oracle. If you're on 10g, you can fairly easily define a scheduled job
> that updates statistics once in a while, but be aware of a bug that
> caused not all stats to be updated properly (see Metalink).
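>
> Something along these lines should work - untested, assuming 10g, and
> the job name and schedule here are made up:
>
>     BEGIN
>       -- Nightly job that refreshes optimizer statistics for the current
>       -- schema. 'GATHER STALE' only touches tables whose monitoring
>       -- info shows significant change since the last gather.
>       DBMS_SCHEDULER.CREATE_JOB(
>         job_name        => 'nightly_stats',   -- hypothetical name
>         job_type        => 'PLSQL_BLOCK',
>         job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(' ||
>                            'ownname => USER, options => ''GATHER STALE''); END;',
>         repeat_interval => 'FREQ=DAILY;BYHOUR=2',
>         enabled         => TRUE);
>     END;
>     /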
>
> Frankly, since you're in so much of a hurry, I believe you're in
> trouble - if not today, then at a later point when even more
> installations are out there. I think at some point you have to do this
> properly - rather sooner than later. And I think you need to dig a bit
> deeper into Oracle - IMHO, DB tuning requires some level of insight.
>
> Regards
>
> robert
Well, the problem is that the two old versions had pretty much the same schema and the same SQL, and Oracle was always quicker.
The premise at the moment is that the Data Access Layer is uniform, so an Update statement is required that checks whether the version the user read is still the latest version. But we no longer want to update the actual records of the table we are reading, hence the dummy table.
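For concreteness, this is the update as I understand it, in Oracle bind-variable syntax (:id, :last_read and :current_trans_num stand in for the @variables quoted above; table and column names as already described):

    UPDATE DUMMY
       SET Col1 = 1
     WHERE :last_read = (SELECT MAX(versionNo)
                           FROM Table1
                          WHERE id = :id
                            AND versionNo <= :current_trans_num);

The client then looks at the rowcount: presumably 1 means the version the user read is still the latest, and 0 means someone has committed a newer version in the meantime.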
Second, the logic has now changed, mainly to accommodate the needs of SQL Server, where readers block writers, etc. Under the old logic, all you needed to do to find the latest version was look at the deleted column for a specific Id and check that it was still -1; now you have to do a select max(versionNo) for the Id.
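By contrast, the old check was roughly the following - the SET clause here is my guess, since I only described the WHERE clause above:

    UPDATE Table1
       SET deleted = deleted          -- hypothetical no-op SET
     WHERE id = :id
       AND deleted = -1
       AND versionNo = :last_read;

That version uses only equality predicates, which presumably is why an index could satisfy it directly, whereas the new check has to compute a MAX over the versions of the Id.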
It doesn't look like a massive change in terms of SQL, but clearly it is causing us a problem. I guess the simple question is: "Is there a better way of writing this Update statement?" If not, I have 2 choices: a) find a way to make this statement work, which may just mean a bit more memory, or b) change the logic.
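For reference, the secondary index I expected it to use would be along these lines (the index name is made up):

    CREATE INDEX table1_id_ver_ix ON Table1 (id, versionNo);

My understanding is that with id and versionNo indexed together, Oracle should be able to answer the max(versionNo) subquery from the index alone (a MIN/MAX range scan) instead of visiting more and more rows as Table1 grows - but I have not been able to confirm that from a trace yet, and stale statistics on a fast-growing table could also push the optimizer off the index.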
If it is just a memory issue, it might still be a problem, depending on how much extra memory this new logic requires.
Once again, thanks for your input.
Phil
Received on Wed Apr 18 2007 - 05:37:34 CDT