
Re: performance issue during loading

From: Thomas Kyte <tkyte_at_us.oracle.com>
Date: Wed, 10 Mar 1999 19:00:13 GMT
Message-ID: <36efc077.22210356@192.86.155.100>


A copy of this was sent to dmjl0922_at_my-dejanews.com (if that email address didn't require changing). On Wed, 10 Mar 1999 13:07:33 GMT, you wrote:

>We are having a performance issue loading our fact table. We are only able to
>process 10 records per second when processing the PL/SQL code that transforms
>the fact table.
>
>Simple facts
>
>HP-UX 11
>Oracle 8.?
>2 CPUs
>1G RAM
>6 mirrored pairs disk
>fact file = 120Meg = 400,000 rows
>
>We load this data into a stage table first, and then use PL/SQL packages to
>validate business rules and load the fact table.
>
>We have only been able to achieve 10 records per second loading the fact
>table. The size of the file mentioned above is not relevant. A file of 150K
>rows loads just as slow. We do not have much data in the fact table yet
>either. We just started loading.
>
>Our PL/SQL is fairly straightforward, but does about 9 lookups into other
>tables for validation. We are thinking about making the smaller table
>lookups into array lookups. Will this help? Most of the lookups are on very
>small tables (< 20 rows). Two of the lookups are on larger tables (100K -
>200K rows). Are we doing too many lookups per record?
>

can you use declarative integrity instead of lookups? declarative check constraints + foreign keys will be much faster than procedural code any day.
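for example (the table and column names here are made up, just to illustrate), instead of looking up a tiny code table procedurally you could let the database enforce it:

  alter table fact add constraint fact_status_chk
    check ( status_cd in ( 'A', 'I', 'P' ) );

  alter table fact add constraint fact_cust_fk
    foreign key ( cust_id ) references cust ( cust_id );

a bad row then fails right at the insert instead of being checked row by row in plsql, and the small lookup tables never have to be queried at all.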

can you 'parallelize' the plsql processing? can you kick off X scripts, each of which might be able to process 10 rows/second?

have you set timed_statistics to true and enabled sql_trace and used tkprof to analyze the results? (see the server tuning manual for info on the above)....
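something along these lines (the trace file name below is hypothetical -- look in your user_dump_dest directory for the real one):

  alter session set timed_statistics = true;
  alter session set sql_trace = true;
  -- now run the load ...

and then from the OS:

  $ tkprof ora_12345.trc load_report.txt sort=exeela,fchela

that report will show you exactly which of the 9 lookups is eating the time.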

>Also, the cursor on the stage table is implicit. Should this be explicit for
>perf reasons?
>
>We are also issuing a commit after each row. Should we change that to every
>100, or every 1000 rows?
>

make it more than 1. go for 1000 or more.
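a minimal sketch of what the loop might look like (STAGE and the fact-table insert are placeholders for your own code):

  declare
    l_cnt number := 0;
  begin
    for rec in ( select * from stage )
    loop
      -- validate the row and insert into the fact table here ...
      l_cnt := l_cnt + 1;
      if mod( l_cnt, 1000 ) = 0
      then
        commit;
      end if;
    end loop;
    commit;  -- catch the last partial batch
  end;
  /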

>All of the things I can think about are code related since I am not a DBA. We
>can tweak the code, but I am looking for an order of magnitude of improvement
>that may not be possible tweaking code.
>

you might just be able to run 10 at the same time and get the speedup. You'll have to "partition" your data somehow in the stage table by key or something to give each process something to work with.
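one way to slice it up, assuming the stage table has a numeric key (all of these names are hypothetical): give each process a slice number and have it work only on its own rows --

  create or replace procedure load_slice( p_slice   in number,
                                          p_nslices in number )
  as
  begin
    for rec in ( select *
                   from stage
                  where mod( stage_key, p_nslices ) = p_slice )
    loop
      null;  -- validate + insert into fact, committing every 1000 as above
    end loop;
  end;
  /

then fire up 10 sqlplus sessions and run load_slice(0,10), load_slice(1,10), ... load_slice(9,10) -- one per session.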

>Does anyone have any other recommendations that may help, or any benchmarks to
>compare with? I think that 10 records a second is poor, even if we perform 9
>lookups.
>
>Other DB things we have tried
>We moved data files around disk to spread i/o issue.
>We have not yet used table partitioning.
>We are dropping indexes on fact table.
>A DBA has performed general DB tuning.
>
>
>Thanks for any help on this.
>
>
>
 

Thomas Kyte
tkyte_at_us.oracle.com
Oracle Service Industries
Reston, VA USA

--
http://govt.us.oracle.com/ -- downloadable utilities  



Opinions are mine and do not necessarily reflect those of Oracle Corporation
