Re: Performance issue while loading large amount of data
"Noons" <nsouto_at_optusnet.com.au.nospam> wrote in message
news:Xns930171DE5FD11mineminemine_at_210.49.20.254...
> Wanderley <wces123_at_yahoo.com> wrote in
> news:LHZT9.34340$Pb.923348_at_twister.austin.rr.com and I quote:
>
> >
> > I agree. As always, practice makes perfect. You have to find the sweet
> > spot between no commits at all (which would require very large rollback
> > segments) and too many commits (which would slow down your job).
>
>
> Let's make something clear here. A large transaction size
> (infrequent COMMIT) on an INSERT does NOT necessarily imply
> the need for very large rollback segments.
>
Well, it implies getting on for 18 bytes per row, since the rowid of the record about to be inserted is stored in rollback. Small, perhaps. But not nothing.
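[Back-of-envelope arithmetic for that figure — the 18-bytes-per-row constant is HJR's estimate above, not a measured value:]

```python
# ~18 bytes of rollback per inserted row (the per-row rowid entry
# mentioned above); illustrative estimate only, not a measurement.
UNDO_BYTES_PER_ROW = 18

def undo_mb(rows):
    """Estimated rollback generated by a plain INSERT of `rows` rows."""
    return rows * UNDO_BYTES_PER_ROW / 2**20

# For a 10-million-row load this is on the order of 170 MB of
# rollback: small relative to the data, but certainly not nothing.
```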
>
> Rollback segments are hardly used for mass loads into a table.
> The only case where significant rollback would be generated
> is if the table was already indexed, in which case the rollback
> entries would be for the index blocks.
>
>
> A pure INSERT hardly uses any rollback at all, and then only
> for space management at dictionary level (recursive SQL).
>
Er, I could be wrong, but see above: there *is* some rollback generated on the table being inserted into. The rollback generated by an insert is not just for the recursive stuff.
Regards
HJR
>
> > Some methods are faster than others, though. For instance, depending on
> > your version of Oracle and the kind of data you are loading (from flat
> > files, from binary files, from other databases, etc.), you could use
> > direct load (SQL*Loader).
>
>
> Exactly.
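[A direct-path SQL*Loader run of the kind being suggested might look like this — the file, table, and column names are hypothetical, invented for illustration:]

```text
-- customers.ctl (hypothetical names throughout)
LOAD DATA
INFILE 'customers.dat'
APPEND INTO TABLE customers
FIELDS TERMINATED BY ','
(cust_id, cust_name, created DATE "YYYY-MM-DD")
```

[Invoked with direct=true on the command line, e.g. `sqlldr userid=scott/tiger control=customers.ctl direct=true`, the load writes formatted blocks above the high-water mark and so generates very little undo for the table data — which is the point being agreed with here.]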
>
>
> --
> Cheers
> Nuno Souto
> nsouto_at_optusnet.com.au.nospam
Received on Sun Jan 12 2003 - 03:26:55 CST