Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: Performance issue while loading large amount of data

Re: Performance issue while loading large amount of data

From: Noons <nsouto_at_optusnet.com.au.nospam>
Date: 12 Jan 2003 00:24:58 GMT
Message-ID: <Xns930171DE5FD11mineminemine@210.49.20.254>


Wanderley <wces123_at_yahoo.com> wrote in
news:LHZT9.34340$Pb.923348_at_twister.austin.rr.com and I quote:

>
> I agree. As always, practice makes perfect. You have to find the sweet
> spot between no commits at all (which would require very large rollback
> segs) or too many commits (which would slow down your job).

Let's make something clear here. A large transaction size (infrequent COMMIT) on an INSERT does NOT necessarily imply the need for very large rollback segments.

Rollback segments are hardly used for mass loads into a table. The only case where that would be true is if the table was already indexed, in which case the undo generated would be for the index block changes.

A pure INSERT hardly uses any rollback at all — essentially just enough to identify the new rows for removal — plus a little for space management at the dictionary level (recursive SQL).
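For anyone who wants to see this for themselves, a rough way to check it from SQL*Plus is to compare the undo reported in V$TRANSACTION after inserting into an unindexed versus an indexed copy of the same table (table names here are made up for the demo; V$TRANSACTION, V$SESSION and V$MYSTAT are the standard dynamic views):

```sql
-- Demo tables (hypothetical): t_noindex has no indexes,
-- t_indexed is the same table with an index on id.
INSERT INTO t_noindex SELECT * FROM source_rows;

-- Undo blocks/records consumed by the current transaction:
SELECT used_ublk, used_urec
FROM   v$transaction
WHERE  addr = (SELECT taddr
               FROM   v$session
               WHERE  sid = (SELECT sid FROM v$mystat WHERE ROWNUM = 1));
ROLLBACK;

INSERT INTO t_indexed SELECT * FROM source_rows;
-- Re-run the same query: used_ublk/used_urec will typically be
-- noticeably higher, and the difference is the index maintenance.
```

The exact numbers depend on row size, index count and block size, but the pattern (indexes, not the table insert itself, drive the undo) should be clear.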

> Some methods are faster than others, though. For instance, depending on
> your version of Oracle and the kind of data you are loading (from flat
> files, from binary files, from other databases, etc), you could use
> direct load (sqlloader).

Exactly.
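For loads from flat files, the direct path would look something like this (file names, table and column names are invented for illustration; DIRECT=TRUE is the standard sqlldr option):

```sql
-- load.ctl (hypothetical control file)
LOAD DATA
INFILE 'data.dat'
APPEND INTO TABLE big_table
FIELDS TERMINATED BY ','
(id, name, created DATE 'YYYY-MM-DD')
```

Then invoke it with `sqlldr userid=scott/tiger control=load.ctl direct=true`. A direct path load formats data blocks and writes them above the high-water mark, bypassing the buffer cache and conventional INSERT processing, which is why it sidesteps most of the undo/commit-frequency debate entirely.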

-- 
Cheers
Nuno Souto
nsouto_at_optusnet.com.au.nospam
Received on Sat Jan 11 2003 - 18:24:58 CST

