Re: Performance issue while loading large amount of data
Wanderley <wces123_at_yahoo.com> wrote in
news:LHZT9.34340$Pb.923348_at_twister.austin.rr.com and I quote:
>
> I agree. As always, practice makes perfect. You have to find the sweet
> spot between no commits at all (which would require very large rollback
> segs) or too many commits (which would slow down your job).
Let's make something clear here: a large transaction size (infrequent COMMITs) on an INSERT does NOT necessarily imply the need for very large rollback segments.
Rollback segments are hardly used during a mass load into a table. The main exception is when the table is already indexed, in which case most of the undo is generated for the index blocks, not for the table itself.
A pure INSERT into an unindexed table uses hardly any rollback at all, and then only for space management at the dictionary level (recursive SQL).
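To illustrate the point, here is a minimal sketch of the classic mass-load pattern this implies: take the indexes out of play before the load, do the INSERT in one large transaction, and rebuild afterwards. All table and index names below are hypothetical, and the APPEND hint (direct-path insert) is optional; even a conventional INSERT into an unindexed table generates very little undo, since undo for an insert only has to record which rows to delete on rollback, not the row data itself.

```sql
-- Hypothetical names: big_table, big_table_ix, staging_table.

-- 1. Take the index out of play so no undo is generated for it:
ALTER INDEX big_table_ix UNUSABLE;

-- 2. Load in one large transaction; undo stays small because
--    undo for an INSERT records little more than the rowids:
INSERT /*+ APPEND */ INTO big_table
SELECT * FROM staging_table;
COMMIT;

-- 3. Rebuild the index afterwards:
ALTER INDEX big_table_ix REBUILD;
```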
> Some methods are faster than others, though. For instance, depending on
> your version of Oracle and the kind of data you are loading (from flat
> files, from binary files, from other databases, etc), you could use
> direct load (sqlloader).
Exactly.
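For the flat-file case, a minimal SQL*Loader direct-path sketch might look like this. The file names, table, and columns are all hypothetical; `direct=true` selects the direct load path mentioned above, which formats blocks directly and bypasses much of the conventional INSERT machinery.

```sql
-- load_emp.ctl : hypothetical control file for a comma-delimited flat file
LOAD DATA
INFILE 'emp_data.csv'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)

-- Invoked from the shell, e.g.:
--   sqlldr userid=scott/tiger control=load_emp.ctl direct=true
```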
--
Cheers,
Nuno Souto
nsouto_at_optusnet.com.au.nospam

Received on Sat Jan 11 2003 - 18:24:58 CST