Re: Insert 1000s of row/sec, index performance problem

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: 2000/09/20
Message-ID: <969430938.8282.0.nnrp-08.9e984b29@news.demon.co.uk>

Another thought prompted by Guy's comments: is the relevant index on the table that is updated prefixed, so that the update doesn't have to 'probe' the indexes on the wrong partitions before finding the relevant partition? This could be where the excess reads are going (even though you mention they are on the insert).
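For illustration (table, column and index names here are hypothetical, not from the thread), a prefixed local index is one whose leading column is the partition key, so a lookup that supplies that column touches only one index partition:

    -- Hypothetical: table C is range-partitioned on account_no.
    -- Leading the index with the partition key makes it "prefixed".
    CREATE INDEX c_acct_idx ON table_c (account_no, txn_date)
        LOCAL;   -- one index partition per table partition

    -- With a non-prefixed local index, a lookup that does not supply
    -- the partition key may have to probe every index partition.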

-- 

Jonathan Lewis
Yet another Oracle-related web site:  http://www.jlcomp.demon.co.uk

    Guy Birkbeck wrote in message <39C7E53D.CFE97BC1_at_tandr.com>...
    More silly questions:

    a.  Is the index global, or partitioned in the same way the
        underlying table is?
    b.  Have you checked for dynamic extension (on the tables, indexes
        and rollback segments)?
    c.  How are your commits handled (do the servers commit the whole
        logical unit of work, or is A committed separately from B and C)?
    d.  Are the FKs fully indexed in the children? (See point 1 below,
        and the sketch just after this list.)
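    For illustration only (table and column names hypothetical), an
    unindexed foreign key in a child table is covered like this:

        -- Hypothetical: child table B carries an FK to table A.
        -- Without an index on the FK column, maintenance on the
        -- parent key can force scans (and locks) on the child.
        CREATE INDEX b_a_fk_idx ON table_b (a_key);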
    
    Silly suggestions:

    e.  If the app is now working properly, consider disabling FK and
        other constraints.
    f.  You may wish to run RULE-based on the update; at the identified
        update rate your stats won't be worth the memory they are
        cached in. (Sketches of both follow.)
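    As a sketch of both suggestions (constraint, table and bind names
    hypothetical):

        -- Disable a foreign-key constraint once the app is trusted:
        ALTER TABLE table_b DISABLE CONSTRAINT b_a_fk;

        -- Force the rule-based optimizer for the update alone:
        UPDATE /*+ RULE */ table_c
           SET balance = balance + :delta
         WHERE account_no = :acct;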
    ;) guy 
    Jonathan Lewis wrote: 

        The index won't be 'too deep'.
        Which insert is taking the 40 reads?  You have
        2 different inserts - is it the A insert or the B insert?
        I would suspect the excessive reading is rollback
        reading for consistency checking, since the A
        table is not partitioned across Pro*C sessions.
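        One way to check that suspicion (a sketch; these statistic
        names are standard v$sysstat entries, but their relevance
        here is an assumption):

            SELECT name, value
              FROM v$sysstat
             WHERE name IN ('consistent gets',
                            'consistent changes',
                            'data blocks consistent reads - undo records applied');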

        What are the options for making table A
        a two-column partition (date, account number)?
        You have to be a bit careful with this, though,
        as you may get hit by an excessive number
        of partitions in a single object.
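        A minimal sketch of such a two-column range partition (names,
        bounds and column types are hypothetical):

            -- Caveat: multi-column range bounds compare column by
            -- column, so the second column only discriminates rows
            -- falling exactly on a date bound, and each extra split
            -- multiplies the partition count - hence the warning above.
            CREATE TABLE table_a (
                trade_date  DATE,
                account_no  NUMBER,
                payload     VARCHAR2(100)
            )
            PARTITION BY RANGE (trade_date, account_no) (
                PARTITION p_sep_lo VALUES LESS THAN
                    (TO_DATE('2000-10-01','YYYY-MM-DD'), 500000),
                PARTITION p_sep_hi VALUES LESS THAN
                    (TO_DATE('2000-10-01','YYYY-MM-DD'), MAXVALUE),
                PARTITION p_oct_lo VALUES LESS THAN
                    (TO_DATE('2000-11-01','YYYY-MM-DD'), 500000),
                PARTITION p_oct_hi VALUES LESS THAN
                    (TO_DATE('2000-11-01','YYYY-MM-DD'), MAXVALUE)
            );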

        Silly question - are you confident the slowdown
        isn't occurring because you reach some sort of
        checkpoint/dbwr plateau where the system
        finally reaches its steady-state write rate?
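        A quick way to look for that (a sketch; the wait events named
        are standard, but their relevance here is an assumption):

            SELECT event, total_waits, time_waited
              FROM v$system_event
             WHERE event IN ('free buffer waits',
                             'write complete waits',
                             'log file switch (checkpoint incomplete)');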

        -- 

        Jonathan Lewis 
        Yet another Oracle-related web site:  http://www.jlcomp.demon.co.uk

        br_at_bitberry.com wrote in message <8q7jih$7kg$1_at_nnrp1.deja.com>...

>Greetings.
>
>I am working on optimizing an application that has very high
>performance requirements. The application bottlenecks have been
>eliminated, and now we have a few left in the database.
>
>The application works in real-time (which is a must) and does
>the following per request:
>
>* Insert one row in table A
>* Insert one or more rows in table B
>* Update one row in table C
>
>All tables only have a primary key and a few referential
>constraints, no additional indexes.
>
>Table A is partitioned on a date range, currently one partition
>per month. Grows rapidly. Data is never updated, only appended.
>Arrives almost sequentially.
>
>Table B is partitioned on a number range (account number).
>Grows rapidly. Data is never updated, only appended.
>
>Table C is partitioned on the same number range as B. Since this
>table has many updates I use a KEEP buffer pool on it. Each row
>contains 3 numbers, but there are 25 million rows. It does not
>grow much. We are considering using a hash cluster here.
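For illustration (cluster, table and column names hypothetical), a hash
cluster for table C might be sketched like this - noting that a clustered
table cannot also be partitioned:

    -- Hashing on account_no locates a row's block directly,
    -- with no index probe at all.
    CREATE CLUSTER acct_cluster (account_no NUMBER)
        HASHKEYS 25000000   -- roughly one hash key per row
        SIZE 60;            -- bytes reserved per key; 3 numbers fit easily

    CREATE TABLE table_c (
        account_no NUMBER,
        n1 NUMBER, n2 NUMBER, n3 NUMBER
    )
    CLUSTER acct_cluster (account_no);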
>
>
>The application is a server written in Pro*C, using host arrays
>etc. It runs as multiple processes, and since tables B and C are
>partitioned on an account number range, we have implemented a
>mechanism in the server ensuring that the same server process
>always handles the same account number range, which gives us the
>benefit that only one server works on each partition (no battling
>for disk access).
>
>This all works very nicely. For a while. If we squeeze 750
>requests through per second, we run stable for about 2 hours.
>Then the inserts start taking too long, and stats show that
>Oracle performs as many as 40 reads per insert! It's the index
>that's the problem, of course; my guess is that it becomes too
>deep.
>
>What we have considered is writing the new data to a temporary
>table, collecting data for maybe 10 minutes. After that, the
>server will write to another temporary table and another process
>will move the data from the full temporary table to the "real"
>table using direct load (INSERT ... SELECT with the /*+ APPEND */
>hint).
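A minimal sketch of that move step (staging-table names hypothetical):

    -- Direct-path load from the filled staging table into the
    -- live table; APPEND writes above the high-water mark.
    INSERT /*+ APPEND */ INTO table_a
    SELECT * FROM stage_a_1;

    COMMIT;   -- a direct-path insert must be committed before the
              -- session can read the table again

    TRUNCATE TABLE stage_a_1;   -- make the staging table reusable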
>
>Do you have experience with a system like this? How do you
>achieve such a high number of inserts per second? Does the
>proposed solution sound reasonable, or do you have any other
>ideas?
>
>
>Looking forward to your comments.
>
>Best regards,
>Brian