
Re: Oracle performance on large loads

From: Jared Hecker <jared_at_planet.net>
Date: 1996/12/16
Message-ID: <59467i$euj@jupiter.planet.net>

Mike -

My first thought is: why not load everything into a 'master' table, then use triggers and stored procedures to make a single pass through it and update the requisite target tables? This would let you use SQL*Loader in direct mode (and parallel mode if you're on an SMP system) without giving up the flexibility you want in breaking up the records and verifying them independently. Once the records are in the database, processing is much faster, assuming the db is not resource constrained.
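Something like this, just as a sketch - the table and column names below are invented, and the exact options will vary with your release:

  -- sqlldr control file; run with e.g. sqlldr userid=... control=load.ctl direct=true
  LOAD DATA
  INFILE 'feed.dat'
  APPEND
  INTO TABLE master_stage
  FIELDS TERMINATED BY ','
  (rec_type, cust_id, amount, load_date DATE "YYYYMMDD")

  -- then a single pass through the staging table, inside the database
  CREATE OR REPLACE PROCEDURE distribute_stage IS
  BEGIN
    FOR r IN (SELECT * FROM master_stage) LOOP
      IF r.rec_type = 'A' THEN
        INSERT INTO target_a (cust_id, amount) VALUES (r.cust_id, r.amount);
      ELSIF r.rec_type = 'B' THEN
        INSERT INTO target_b (cust_id, amount) VALUES (r.cust_id, r.amount);
      END IF;
      -- per-record verification/rejection logic goes here
    END LOOP;
    COMMIT;
  END;
  /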

For example, two years ago I participated in an evaluation of various hardware platforms for Oracle VLDB's. On an eight-processor Sequent (using 66-MHz 486's) with 1GB of physical memory and 120GB of disk, we parallelized a sqlload run to load 60GB of data in just under ninety minutes. Subsequent indexing (done in parallel) - not just PK's but a real-life set of indexes - took just over two hours to run. Granted, YMMV depending on your platform and requirements, but don't fight the database - let the kernel do as much of the work as possible. As Art Carney said in that immortal epic 'Roadie', "Everything'll work, if you let it."
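For the indexing piece, a parallel create looks something like this (again only a sketch - index/table names are made up, and the degree, tablespace and UNRECOVERABLE keyword depend on your version and disk layout):

  CREATE INDEX big_tab_idx1 ON big_tab (cust_id, region, amount)
    PARALLEL (DEGREE 8)
    UNRECOVERABLE
    TABLESPACE idx_ts;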

hth -

Regards,
jh

thielm (thielm_at_ix.netcom.com) wrote:

: Does anyone have any experience with loading very large amounts of
: fairly complicated data
: and if so does 140 hours seem normal ? I've tried SQL Loader but because

--
Jared Hecker              |  ** HWA, Inc. **   Oracle and Sybase
jared_at_hwai.com            |    database architecture and administration
76276.740_at_compuserve.com  |    - serving the NYC/NJ region -
Received on Mon Dec 16 1996 - 00:00:00 CST

