Re: For the GURUS - Load Process slowing down

From: Jerry Gitomer <jgitomer_at_erols.com>
Date: Wed, 7 Jul 1999 21:51:45 -0400
Message-ID: <7m11d5$h9n$1_at_autumn.news.rcn.net>


Hi Ozzy,

    I think that people are addressing the wrong question. If I understand correctly, your problem is not in loading the raw data into the database, but in inserting and updating from the load table into the member table. You also stated that the problem is related to indexes and went on to mention that you were encountering performance problems with a 25,000-row table.

    It sounds like the problem is the need to reorganize the indexes as rows are added to the table. Unless you have specified otherwise, Oracle indexes are a variation of the B-tree. (Please note that I said the indexes -- not the data.) If an index entry is needed in an already full block, Oracle splits the block in order to make room for the new entry. Since a B-tree is a multi-level structure, it may also be necessary to split the index block at the next higher level.
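    If you want to see how the trees are growing, you can validate an index and then look at INDEX_STATS. Something along these lines (the index name is only an example, use one of your own); keep in mind that INDEX_STATS holds just one row, for the index you validated last:

        ANALYZE INDEX member_name_idx VALIDATE STRUCTURE;

        -- Height of the B-tree, leaf rows, and deleted leaf rows
        -- for the index just validated
        SELECT name, height, lf_rows, del_lf_rows, pct_used
          FROM index_stats;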

    Look at the PCTFREE value for your indexes. If it is near zero, drop the index, recreate it with a PCTFREE value of 20, and then analyze it. Run some timing tests and see if you get an improvement.
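    For one index that would look roughly like this (the table, index, and column names are only placeholders, substitute your own):

        -- Check the current setting
        SELECT index_name, pct_free
          FROM user_indexes
         WHERE table_name = 'MEMBER';

        -- Rebuild with more free space per block
        DROP INDEX member_name_idx;

        CREATE INDEX member_name_idx
            ON member (last_name, first_name)
            PCTFREE 20;

        ANALYZE INDEX member_name_idx COMPUTE STATISTICS;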

    The second thing to look at is your init.ora parameters for the block buffers and for sorting. The defaults are laughably low. The starting point for db_block_buffers should be the value Oracle calls "Large", and you should increase your SGA size considerably. (On an NT box I like to start with an 18MB shared pool and 6400 block buffers.)
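    As a starting point only (these figures match the NT numbers above; scale them to the memory you actually have free, and remember the instance has to be restarted before init.ora changes take effect):

        db_block_buffers        = 6400        # buffer cache = 6400 * db_block_size bytes
        shared_pool_size        = 18874368    # 18 MB shared pool
        sort_area_size          = 1048576     # 1 MB per-session sort work area
        sort_area_retained_size = 1048576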

    Oops, I almost forgot: what are your indexes like? How many do you have on the table? How many bytes is each index entry? The reason I ask is that you may have "overindexed" your table. If you are running 8i and can allocate enough buffer space to keep the table in RAM, even full table scans run surprisingly fast. So, based on how the table is used, it may be worthwhile to eliminate some of your indexes.
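    A quick inventory of what is on the table is easy to get from the data dictionary (I am guessing the table is called MEMBER, so substitute the real name):

        SELECT i.index_name, i.uniqueness,
               c.column_name, c.column_position
          FROM user_indexes i, user_ind_columns c
         WHERE i.table_name = 'MEMBER'
           AND c.index_name = i.index_name
         ORDER BY i.index_name, c.column_position;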

regards
Jerry Gitomer

>Ozzy wrote in message <7ltett$62g$1_at_hermes.is.co.za>...
>>Hi
>>
>>I have written a load process (using SPs and Packages) that loads and/or
>>updates live 24hr member data. The problem is that as the destination
>>member table grows, the load gets progressively slower.
>>
>>Basically what the process is doing, is loading data, via SQL Loader, into a
>>member load table. It then compares the data in the load table with data in
>>the live member table. If the member exists and the data differs, it will
>>update, if there is no match, a row is added. As the table grows, it takes
>>progressively longer to load the same amount of rows.
>>
>>I have spent a long time debugging and running tests and have established
>>that the problem does not lie in the searches or the insertion of data rows,
>>but in the growing indexes. Eg. I have removed all the indexes (except for
>>the PK/Unique indexes) and the load becomes constant. As soon as I put the
>>necessary indexes on, the load begins to slow. I have also given the
>>indexes 500K and 1M extents and no difference is noticed.
>>
>>Is this a common problem? Will it reach a plateau and become constant
>>eventually?
>>
>>I currently only working with about 25 000 rows.
>>
>>Any help in this regard will be greatly appreciated. (Maybe my problem will
>>be solved when I go onto real serious hardware?)
>>
>>Thanks
>>Ozzy